From lindsay.mathieson at gmail.com Wed Sep 1 06:37:33 2021 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Wed, 1 Sep 2021 14:37:33 +1000 Subject: [PVE-User] PBS Backup and iothreads Message-ID: Is it still impossible to backup disks with iothreads enabled? I ask because it seems to be working for me :) Have a Windows 10 VM * "VIRT IO SCSI single" controller * 2 disks, both with iothreads enabled. * 1 efi disk And it seems to be backing up (live) just fine to a PBS Server, latest updates on everything PVE Host - 7.01 PBS (Container) - 2.0-9 Thanks From a.lauterer at proxmox.com Wed Sep 1 09:56:48 2021 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Wed, 1 Sep 2021 09:56:48 +0200 Subject: [PVE-User] PBS Backup and iothreads In-Reply-To: References: Message-ID: On 9/1/21 6:37 AM, Lindsay Mathieson wrote: > Is it still impossible to backup disks with iothreads enabled? Yes, since Proxmox VE 6.1 [0] Cheers, Aaron [0] https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1 > > I ask because it seems to be working for me :) Have a Windows 10 VM > > ?* "VIRT IO SCSI single" controller > ?* 2 disks, both with iothreads enabled. > ?* 1 efi disk > > And it seems to be backing up (live) just fine to a PBS Server, latest updates on everything > > PVE Host - 7.01 > > PBS (Container) - 2.0-9 > > > Thanks > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From lindsay.mathieson at gmail.com Wed Sep 1 10:22:54 2021 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Wed, 1 Sep 2021 18:22:54 +1000 Subject: [PVE-User] PBS Backup and iothreads In-Reply-To: References: Message-ID: On 1/09/2021 5:56 pm, Aaron Lauterer wrote: >> Is it still impossible to backup disks with iothreads enabled? > > Yes, since Proxmox VE 6.1 [0] Excellent, thanks! I presume you mean yes, its is possible :) From chris at itg.uy Wed Sep 1 10:25:15 2021 From: chris at itg.uy (Chris Sutcliff) Date: Wed, 01 Sep 2021 08:25:15 +0000 Subject: [PVE-User] PBS Backup and iothreads In-Reply-To: References: Message-ID: Yes, per the 6.1 release notes: "Backup/Restore: VMs with IOThreads enabled can be backed up with Proxmox VE 6.1. Additionally, administrators can run scheduled backup jobs manually from the Datacenter in the GUI." Kind Regards Chris Sutcliff ??????? Original Message ??????? On Wednesday, September 1st, 2021 at 09:22, Lindsay Mathieson wrote: > On 1/09/2021 5:56 pm, Aaron Lauterer wrote: > > > > Is it still impossible to backup disks with iothreads enabled? > > > > Yes, since Proxmox VE 6.1 [0] > > Excellent, thanks! > > I presume you mean yes, its is possible :) > > pve-user mailing list > > pve-user at lists.proxmox.com > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From a.lauterer at proxmox.com Wed Sep 1 10:42:15 2021 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Wed, 1 Sep 2021 10:42:15 +0200 Subject: [PVE-User] PBS Backup and iothreads In-Reply-To: References: Message-ID: On 9/1/21 10:22 AM, Lindsay Mathieson wrote: > On 1/09/2021 5:56 pm, Aaron Lauterer wrote: >>> Is it still impossible to backup disks with iothreads enabled? >> >> Yes, since Proxmox VE 6.1 [0] > > > Excellent, thanks! > > > I presume you mean yes, its is possible :) :D yeah, that's what I meant... 
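For reference, a VM configured the way Lindsay describes (VirtIO SCSI single controller, two disks with iothreads, one EFI disk) would have entries roughly like the following in its /etc/pve/qemu-server/<vmid>.conf; the VM ID, storage name and sizes here are made up for illustration:

    scsihw: virtio-scsi-single
    efidisk0: local-lvm:vm-101-disk-0,size=4M
    scsi0: local-lvm:vm-101-disk-1,iothread=1,size=64G
    scsi1: local-lvm:vm-101-disk-2,iothread=1,size=128G

With the virtio-scsi-single controller each scsiX disk gets its own controller instance, which is what makes the iothread=1 flag effective; per the 6.1 release notes quoted earlier in the thread, such disks can be backed up live since Proxmox VE 6.1.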
From f.cuseo at panservice.it Mon Sep 6 08:46:17 2021 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 6 Sep 2021 08:46:17 +0200 (CEST) Subject: [PVE-User] PBS 2.0 and PVE 6.4 compatibility Message-ID: <1499493956.210463.1630910777406.JavaMail.zimbra@zimbra.panservice.it> Good morning. Is PBS 2.0 compatible with PVE 6.4 ? Is it possible to use the PBS 2.0 client with PVE 6.4 ? Thank you, Fabrizio From t.lamprecht at proxmox.com Mon Sep 6 09:37:53 2021 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Mon, 6 Sep 2021 09:37:53 +0200 Subject: [PVE-User] PBS 2.0 and PVE 6.4 compatibility In-Reply-To: <1499493956.210463.1630910777406.JavaMail.zimbra@zimbra.panservice.it> References: <1499493956.210463.1630910777406.JavaMail.zimbra@zimbra.panservice.it> Message-ID: Hi, On 06.09.21 08:46, Fabrizio Cuseo wrote: > Is PBS 2.0 compatible with PVE 6.4 ? Yes, there was no breaking protocol change, so you can back up from a PVE 7 to a PBS 1 and from a PVE 6.4 to a PBS 2. > Is it possible to use the PBS 2.0 client with PVE 6.4 ? > In theory yes, but in practice no, at least for the built .deb package we provide. You can build it yourself from the source code; you may need to at least adapt the zstd library usage to a lower version again to be compatible with the one Debian Buster ships. FYI: PBS 1.1 will receive important bug fix and security updates until the end of Q2 2022, so it still can be used just fine in PVE 6.4. cheers, Thomas
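To make the compatibility statement above concrete, attaching a PBS datastore to a PVE node (6.4 or 7.x) is done with a storage entry of type pbs in /etc/pve/storage.cfg, roughly like this sketch; the storage ID, server name, datastore and fingerprint are placeholders, not values from this thread:

    pbs: pbs-backup
        datastore store1
        server pbs.example.com
        username backup@pbs
        fingerprint <server certificate fingerprint>
        content backup

The password or API token secret is kept separately under /etc/pve/priv/ and is filled in automatically when the storage is added through the GUI.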
From f.cuseo at panservice.it Mon Sep 6 10:12:04 2021 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 6 Sep 2021 10:12:04 +0200 (CEST) Subject: [PVE-User] PBS 2.0 and PVE 6.4 compatibility In-Reply-To: References: <1499493956.210463.1630910777406.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <1990942947.213760.1630915924488.JavaMail.zimbra@zimbra.panservice.it> ----- Il 6-set-21, alle 9:37, Thomas Lamprecht t.lamprecht at proxmox.com ha scritto: > Hi, > > On 06.09.21 08:46, Fabrizio Cuseo wrote: >> Is PBS 2.0 compatible with PVE 6.4 ? > > Yes, there was no breaking protocol change so you can backup from a PVE 7 to a > PBS 1 > and from a PVE 6.4 to a PBS 2 > >> Is possibile to use PBS 2.0 client with PVE 6.4 ? >> > > In theory yes, but in practice no, at least for the build .deb package we > provide. > You can build it for yourself from the source code, you may need to at least > adapt > the zstd library usage to a lower version again to be compatible with the one > Debian > Buster ships. > > FYI: PBS 1.1 will receive important bug fix and security updates until end of Q2 > 2022, > so it still can be used just fine in PVE 6.4. Thank you for your answers. I have seen that file restore with LVM needs 2.0 both server and client side, so I was asking to have this feature. Regards, Fabrizio > > cheers, > Thomas From t.lamprecht at proxmox.com Mon Sep 6 11:20:05 2021 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Mon, 6 Sep 2021 11:20:05 +0200 Subject: [PVE-User] PBS 2.0 and PVE 6.4 compatibility In-Reply-To: <1990942947.213760.1630915924488.JavaMail.zimbra@zimbra.panservice.it> References: <1499493956.210463.1630910777406.JavaMail.zimbra@zimbra.panservice.it> <1990942947.213760.1630915924488.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <08aff75e-b136-8709-1f8b-61756792f592@proxmox.com> On 06.09.21 10:12, Fabrizio Cuseo wrote: > I have seen that file restore with LVM needs 2.0 both server and client side, so i was asking to have this feature. It should work with a newer client (file restore daemon) already, IIRC; the server does not care too much about what's in the data blocks it serves.
- Thomas From lists at merit.unu.edu Mon Sep 6 15:59:33 2021 From: lists at merit.unu.edu (mj) Date: Mon, 6 Sep 2021 15:59:33 +0200 Subject: [PVE-User] PBS 2.0 with pve 6.4 In-Reply-To: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> References: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> Message-ID: Hi, Not sure I understand what you're asking. We are using pve 6.3-6 with pbs 2.0-9. Is there an issue with pbs 2.x combined with pve 6.x? It seems to work here? MJ On 19/08/2021 17:58, Fabrizio Cuseo wrote: > Hello. > Someone is using proxmox backup server 2.X with the last PVE 6.4 ? > If yes, someone have installed on PVE 6.4, backup client 2.X using repository: > "deb http://download.proxmox.com/debian/pbs-client buster main " ? > > I would like to have LVM file restore, but now i don't want to upgrade my cluster. > > Regards, Fabrizio > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.cuseo at panservice.it Mon Sep 6 16:40:52 2021 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 6 Sep 2021 16:40:52 +0200 (CEST) Subject: [PVE-User] PBS 2.0 with pve 6.4 In-Reply-To: References: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <1308846921.231749.1630939252104.JavaMail.zimbra@zimbra.panservice.it> I need to have "file restore" with LVM formatted drive, so is not working with PBS 1.X I have upgraded to 2.0 PBS, but using it with PVE 6.4.X, i can't use file restore with LVM support because has been implemented on 2.x PBS (both server and client). ----- Il 6-set-21, alle 15:59, mj lists at merit.unu.edu ha scritto: > Hi, > > Not sure I understand what you're asking. > > We are using pve 6.3-6 with pbs 2.0-9. > > Is there an issue with pbs 2.x combined with pve 6.x? > > It seems to work here? > > MJ > > On 19/08/2021 17:58, Fabrizio Cuseo wrote: >> Hello. >> Someone is using proxmox backup server 2.X with the last PVE 6.4 ? >> If yes, someone have installed on PVE 6.4, backup client 2.X using repository: >> "deb http://download.proxmox.com/debian/pbs-client buster main " ? >> >> I would like to have LVM file restore, but now i don't want to upgrade my >> cluster. >> >> Regards, Fabrizio >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From martin at proxmox.com Mon Sep 6 17:59:05 2021 From: martin at proxmox.com (Martin Maurer) Date: Mon, 6 Sep 2021 17:59:05 +0200 Subject: [PVE-User] PBS 2.0 with pve 6.4 In-Reply-To: <1308846921.231749.1630939252104.JavaMail.zimbra@zimbra.panservice.it> References: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> <1308846921.231749.1630939252104.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <563ce070-c82e-f912-020e-f47e5fe5f1d1@proxmox.com> On 06.09.21 16:40, Fabrizio Cuseo wrote: > I need to have "file restore" with LVM formatted drive, so is not working with PBS 1.X > I have upgraded to 2.0 PBS, but using it with PVE 6.4.X, i can't use file restore with LVM support because has been implemented on 2.x PBS (both server and client). 
This is a feature from Proxmox VE 7.0, see https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0 (no plan to backport this). Should work with PBS 1.x and 2.x. -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com ____________________________________________________________________ Proxmox Server Solutions GmbH Br?uhausgasse 37, 1050 Vienna, Austria Commercial register no.: FN 258879 f Registration office: Handelsgericht Wien From lists at merit.unu.edu Mon Sep 6 18:40:48 2021 From: lists at merit.unu.edu (mj) Date: Mon, 6 Sep 2021 18:40:48 +0200 Subject: [PVE-User] PBS 2.0 with pve 6.4 In-Reply-To: <563ce070-c82e-f912-020e-f47e5fe5f1d1@proxmox.com> References: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> <1308846921.231749.1630939252104.JavaMail.zimbra@zimbra.panservice.it> <563ce070-c82e-f912-020e-f47e5fe5f1d1@proxmox.com> Message-ID: <5cd4f96d-a952-f753-fc45-62023ee58642@merit.unu.edu> Hi both, Thanks for confirming! MJ On 06/09/2021 17:59, Martin Maurer wrote: > On 06.09.21 16:40, Fabrizio Cuseo wrote: >> I need to have "file restore" with LVM formatted drive, so is not >> working with PBS 1.X >> I have upgraded to 2.0 PBS, but using it with PVE 6.4.X, i can't use >> file restore with LVM support because has been implemented on 2.x PBS >> (both server and client). > > This is a feature from Proxmox VE 7.0, see > https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0 (no plan to backport > this). > > Should work with PBS 1.x and 2.x. > > From f.cuseo at panservice.it Mon Sep 6 20:09:10 2021 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 6 Sep 2021 20:09:10 +0200 (CEST) Subject: [PVE-User] PBS 2.0 with pve 6.4 In-Reply-To: <5cd4f96d-a952-f753-fc45-62023ee58642@merit.unu.edu> References: <834518548.61504.1629388709014.JavaMail.zimbra@zimbra.panservice.it> <1308846921.231749.1630939252104.JavaMail.zimbra@zimbra.panservice.it> <563ce070-c82e-f912-020e-f47e5fe5f1d1@proxmox.com> <5cd4f96d-a952-f753-fc45-62023ee58642@merit.unu.edu> Message-ID: <919939004.237483.1630951750050.JavaMail.zimbra@zimbra.panservice.it> So, i will install a PVE 7.0 as a VM on my 6.4 PVE cluster, setup the same PBS, and "file restore" with that :) ----- Il 6-set-21, alle 18:40, mj lists at merit.unu.edu ha scritto: > Hi both, > > Thanks for confirming! > > MJ > > On 06/09/2021 17:59, Martin Maurer wrote: >> On 06.09.21 16:40, Fabrizio Cuseo wrote: >>> I need to have "file restore" with LVM formatted drive, so is not >>> working with PBS 1.X >>> I have upgraded to 2.0 PBS, but using it with PVE 6.4.X, i can't use >>> file restore with LVM support because has been implemented on 2.x PBS >>> (both server and client). >> >> This is a feature from Proxmox VE 7.0, see >> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0 (no plan to backport >> this). >> >> Should work with PBS 1.x and 2.x. 
>> >> > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From martin.konold at konsec.com Tue Sep 7 14:00:16 2021 From: martin.konold at konsec.com (Konold, Martin) Date: Tue, 07 Sep 2021 14:00:16 +0200 Subject: [PVE-User] Backup to PBS fails Message-ID: <0512086cc7b65670681424f95773841a@konsec.com> Hi there, I have a test setup investigating the migration from oVirt to Proxmox which used to work nicely but now only creates single byte length entries on PBS datastores. Firstly backup to a cephfs storage works as expected: root at hbase01 ~ # vzdump 100 --mode snapshot --node hbase01 --compress zstd --all 0 --storage cephfs01 --mailnotification always INFO: starting new backup job: vzdump 100 --mode snapshot --mailnotification always --node hbase01 --storage cephfs01 --all 0 --compress zstd INFO: Starting Backup of VM 100 (qemu) INFO: Backup started at 2021-09-07 13:38:10 INFO: status = stopped INFO: backup mode: stop INFO: ionice priority: 7 INFO: VM Name: CentOS-8-100.h.konsec.com INFO: include disk 'scsi0' 'pool01:base-100-disk-0' 10G INFO: creating vzdump archive '/mnt/pve/cephfs01/dump/vzdump-qemu-100-2021_09_07-13_38_10.vma.zst' INFO: starting template backup INFO: /usr/bin/vma create -v -c /mnt/pve/cephfs01/dump/vzdump-qemu-100-2021_09_07-13_38_10.tmp/qemu-server.conf exec:zstd --rsyncable --threads=1 > /mnt/pve/cephfs01/dump/vzdump-qemu-100-2021_09_07-13_38_10.vma.dat drive-scsi0=/dev/rbd/pool01/base-100-disk-0 INFO: progress 0% 0/10737418240 0 INFO: progress 1% 107413504/10737418240 64716800 .... Backup to a Proxmox Backup Server PBS fails though root at hbase01 ~ # vzdump 100 --mode snapshot --node hbase01 --compress zstd --all 0 --storage PVE-FE-3 --mailnotification always INFO: starting new backup job: vzdump 100 --all 0 --node hbase01 --mailnotification always --storage PVE-FE-3 --mode snapshot --compress zstd INFO: Starting Backup of VM 100 (qemu) INFO: Backup started at 2021-09-07 13:38:31 INFO: status = stopped INFO: backup mode: stop INFO: ionice priority: 7 INFO: VM Name: CentOS-8-100.h.konsec.com INFO: include disk 'scsi0' 'pool01:base-100-disk-0' 10G INFO: creating Proxmox Backup Server archive 'vm/100/2021-09-07T11:38:31Z' INFO: starting kvm to execute backup task Use of uninitialized value in split at /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 100. /dev/rbd45 generating cloud-init ISO ERROR: VM 100 qmp command 'backup' failed - backup connect failed: command error: EACCES: Permission denied INFO: aborting backup job INFO: stopping kvm after backup task trying to acquire lock... OK ERROR: Backup of VM 100 failed - VM 100 qmp command 'backup' failed - backup connect failed: command error: EACCES: Permission denied INFO: Failed at 2021-09-07 13:38:33 INFO: Backup job finished with errors job errors The target storage exists and is known to PVE. 
root at hbase01 ~ # pvesm status Name Type Status Total Used Available % PVE-FE pbs active 59826418816 1498314752 58328104064 2.50% PVE-FE-2 pbs active 58345629568 17525504 58328104064 0.03% PVE-FE-3 pbs active 58345629568 17525504 58328104064 0.03% cephfs01 cephfs active 1128591360 39051264 1089540096 3.46% local dir active 15718400 10359288 5359112 65.91% pool01 rbd active 1485625252 396081828 1089543424 26.66% pool12 rbd active 4253993533 2619678397 1634315136 61.58% Is this a problem introduced with an update of PBS? I tried adding backup users to PBS and using API-tokens to no avail. Both PVE and PBS are most uptodate (no-subscription) -- Regards ppa. Martin Konold -- Martin Konold - Prokurist, CTO KONSEC GmbH -? make things real Amtsgericht Stuttgart, HRB 23690 Gesch?ftsf?hrer: Andreas Mack Im K?ller 3, 70794 Filderstadt, Germany From dietmar at proxmox.com Tue Sep 7 17:22:03 2021 From: dietmar at proxmox.com (Dietmar Maurer) Date: Tue, 7 Sep 2021 17:22:03 +0200 (CEST) Subject: [PVE-User] Backup to PBS fails Message-ID: <1813553277.3101.1631028123799@webmail.proxmox.com> > > Backup to a Proxmox Backup Server PBS fails though > root at hbase01 ~ # vzdump 100 --mode snapshot --node hbase01 --compress > zstd --all 0 --storage PVE-FE-3 --mailnotification always > INFO: starting new backup job: vzdump 100 --all 0 --node hbase01 > --mailnotification always --storage PVE-FE-3 --mode snapshot --compress > zstd > INFO: Starting Backup of VM 100 (qemu) > INFO: Backup started at 2021-09-07 13:38:31 > INFO: status = stopped > INFO: backup mode: stop > INFO: ionice priority: 7 > INFO: VM Name: CentOS-8-100.h.konsec.com > INFO: include disk 'scsi0' 'pool01:base-100-disk-0' 10G > INFO: creating Proxmox Backup Server archive > 'vm/100/2021-09-07T11:38:31Z' > INFO: starting kvm to execute backup task > Use of uninitialized value in split at > /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 100. > /dev/rbd45 > generating cloud-init ISO > ERROR: VM 100 qmp command 'backup' failed - backup connect failed: > command error: EACCES: Permission denied This error is not related to backup. Seems the VM is stopped and does not have a correctly initialize cloud-init ISO. Please can you try if it is possible to start this VM at all? From martin.konold at konsec.com Tue Sep 7 17:48:10 2021 From: martin.konold at konsec.com (Konold, Martin) Date: Tue, 07 Sep 2021 17:48:10 +0200 Subject: [PVE-User] Backup to PBS fails In-Reply-To: <1813553277.3101.1631028123799@webmail.proxmox.com> References: <1813553277.3101.1631028123799@webmail.proxmox.com> Message-ID: <2b9f52da0e78f625fe548a33ce89a93c@konsec.com> Hi Dietmar, vm100 is a template. vm102 is up and running but shows the same error. 
root at hbase01 ~ # vzdump 102 --mode snapshot --node hbase01 --compress zstd --all 0 --storage PVE-FE --mailnotification always INFO: starting new backup job: vzdump 102 --mode snapshot --compress zstd --storage PVE-FE --mailnotification always --all 0 --node hbase01 INFO: Starting Backup of VM 102 (qemu) INFO: Backup started at 2021-09-07 17:47:03 INFO: status = running INFO: VM Name: rtnl-102.h.konsec.com INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: skip unused drive 'pool01:vm-102-disk-1' (not included into backup) INFO: creating Proxmox Backup Server archive 'vm/102/2021-09-07T15:47:03Z' INFO: issuing guest-agent 'fs-freeze' command INFO: enabling encryption INFO: issuing guest-agent 'fs-thaw' command ERROR: VM 102 qmp command 'backup' failed - backup connect failed: command error: EACCES: Permission denied INFO: aborting backup job INFO: resuming VM again ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup connect failed: command error: EACCES: Permission denied INFO: Failed at 2021-09-07 17:47:04 INFO: Backup job finished with errors job errors --- Regards ppa. Martin Konold -- Martin Konold - Prokurist, CTO KONSEC GmbH -? make things real Amtsgericht Stuttgart, HRB 23690 Gesch?ftsf?hrer: Andreas Mack Im K?ller 3, 70794 Filderstadt, Germany Am 2021-09-07 17:22, schrieb Dietmar Maurer: >> > Backup to a Proxmox Backup Server PBS fails though >> root at hbase01 ~ # vzdump 100 --mode snapshot --node hbase01 --compress >> zstd --all 0 --storage PVE-FE-3 --mailnotification always >> INFO: starting new backup job: vzdump 100 --all 0 --node hbase01 >> --mailnotification always --storage PVE-FE-3 --mode snapshot >> --compress >> zstd >> INFO: Starting Backup of VM 100 (qemu) >> INFO: Backup started at 2021-09-07 13:38:31 >> INFO: status = stopped >> INFO: backup mode: stop >> INFO: ionice priority: 7 >> INFO: VM Name: CentOS-8-100.h.konsec.com >> INFO: include disk 'scsi0' 'pool01:base-100-disk-0' 10G >> INFO: creating Proxmox Backup Server archive >> 'vm/100/2021-09-07T11:38:31Z' >> INFO: starting kvm to execute backup task >> Use of uninitialized value in split at >> /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 100. >> /dev/rbd45 >> generating cloud-init ISO >> ERROR: VM 100 qmp command 'backup' failed - backup connect failed: >> command error: EACCES: Permission denied > > This error is not related to backup. Seems the VM is stopped and does > not have a correctly initialize cloud-init ISO. > > Please can you try if it is possible to start this VM at all? From rightkicktech at gmail.com Wed Sep 8 10:05:00 2021 From: rightkicktech at gmail.com (Alex K) Date: Wed, 8 Sep 2021 11:05:00 +0300 Subject: [PVE-User] VM storage and replication Message-ID: Hi all, I have setup a dual server setup, with latest proxmox v7.x. Each host with its own local storage. No shared storage (CEPH, GlusterFS). I understand I can have VMs hosted on top LVM with qcow2 disk images or on top thin LVM as raw thin LVM volumes. On both cases I still keep the option to be able to perform VM backups. Which one is the preferred way according to your experience? I will try to do some quick tests on the IO performance between the two. Also, I was thinking to replicate the VMs from one host to the other. I understand that for the Proxmox integrated replication feature I need ZFS backed storage. 
As I am not much into ZFS, although I really enjoy FreeNAS and its great features and will definitely look into it later, I was thinking to prepare a custom script that would snapshot the LVM volumes where the VM images reside and sync the VM disks from one host to the other, using rsync, just for a local copy of them. Of course I will take care to have an external media also to periodically export the VMs for backup purposes, though I would like to have a local copy of the VM disk images at the other host, readily available in case I face issues with the external media or one of the hosts. What do you think about this approach? Am I missing some other feature or better approach? Regrading the sync/replication of the VMs between the hosts (without ZFS), I was thinking also to have a dedicated local LVM volume for these periodic backup jobs configured within Proxmox and then the custom script to just rsync these backup images between the two hosts. This seems a simple one though it increases the storage requirements, while with the previous approach with the custom script, the script would snapshot, sync to the other side and remove the snapshot without keeping a redundant local copy of the disk image in the same host. Sorry for the long read. Appreciate any feedback. Alex From nada at verdnatura.es Wed Sep 8 11:25:58 2021 From: nada at verdnatura.es (nada) Date: Wed, 08 Sep 2021 11:25:58 +0200 Subject: [PVE-User] VM storage and replication In-Reply-To: References: Message-ID: <99b82feffbce274c20bfefb18e41efd8@verdnatura.es> hi Alex in case you have a limited budget now you may do backups local backups but it is not recommended by some cheap NAS ASAP temporal solution: 1. create some filesystem LVM/ext4 @node1 and mount at /mnt/backup 2. install NFS server @node1 and export /mnt/backup to node2 3. install NFS client @node2 and create mountpoint /mnt/backup 4. edit fstab @node2 example node1:/mnt/backup /mnt/backup nfs defaults,noatime,bg 0 2 5. mount /mnt/backup 6. add directory to your proxmox storage via webGUI or add to /etc/pve/storage.cfg example dir: backup path /mnt/backup content vztmpl,backup,iso after this you will have backups accessible at both nodes hope it will help you Nada On 2021-09-08 10:05, Alex K wrote: > Hi all, > > I have setup a dual server setup, with latest proxmox v7.x. Each host > with > its own local storage. No shared storage (CEPH, GlusterFS). > > I understand I can have VMs hosted on top LVM with qcow2 disk images or > on > top thin LVM as raw thin LVM volumes. On both cases I still keep the > option > to be able to perform VM backups. Which one is the preferred way > according > to your experience? I will try to do some quick tests on the IO > performance > between the two. > > Also, I was thinking to replicate the VMs from one host to the other. I > understand that for the Proxmox integrated replication feature I need > ZFS > backed storage. As I am not much into ZFS, although I really enjoy > FreeNAS > and its great features and will definitely look into it later, I was > thinking to prepare a custom script that would snapshot the LVM volumes > where the VM images reside and sync the VM disks from one host to the > other, using rsync, just for a local copy of them. Of course I will > take > care to have an external media also to periodically export the VMs for > backup purposes, though I would like to have a local copy of the VM > disk > images at the other host, readily available in case I face issues with > the > external media or one of the hosts. 
What do you think about this > approach? > Am I missing some other feature or better approach? > > Regrading the sync/replication of the VMs between the hosts (without > ZFS), > I was thinking also to have a dedicated local LVM volume for these > periodic > backup jobs configured within Proxmox and then the custom script to > just > rsync these backup images between the two hosts. This seems a simple > one > though it increases the storage requirements, while with the previous > approach with the custom script, the script would snapshot, sync to the > other side and remove the snapshot without keeping a redundant local > copy > of the disk image in the same host. > > Sorry for the long read. > Appreciate any feedback. > > Alex > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From rightkicktech at gmail.com Wed Sep 8 11:38:19 2021 From: rightkicktech at gmail.com (Alex K) Date: Wed, 8 Sep 2021 12:38:19 +0300 Subject: [PVE-User] VM storage and replication In-Reply-To: <99b82feffbce274c20bfefb18e41efd8@verdnatura.es> References: <99b82feffbce274c20bfefb18e41efd8@verdnatura.es> Message-ID: On Wed, Sep 8, 2021 at 12:32 PM nada wrote: > hi Alex > in case you have a limited budget now > you may do backups local backups but it is not recommended by some cheap > NAS ASAP > > temporal solution: > 1. create some filesystem LVM/ext4 @node1 and mount at /mnt/backup > 2. install NFS server @node1 and export /mnt/backup to node2 > 3. install NFS client @node2 and create mountpoint /mnt/backup > 4. edit fstab @node2 example > node1:/mnt/backup /mnt/backup nfs > defaults,noatime,bg 0 2 > 5. mount /mnt/backup > 6. add directory to your proxmox storage via webGUI > or add to /etc/pve/storage.cfg example > > dir: backup > path /mnt/backup > content vztmpl,backup,iso > > after this you will have backups accessible at both nodes > hope it will help you > Hi Nada, Thanx for your feedback. I will definitely include a NAS into the final setup for exporting the backups. I was just wondering about how I would sync local backups between the nodes for an additional local copy and your suggestion seems a nice work-around to me without adding any other custom scripts. Nada > > On 2021-09-08 10:05, Alex K wrote: > > Hi all, > > > > I have setup a dual server setup, with latest proxmox v7.x. Each host > > with > > its own local storage. No shared storage (CEPH, GlusterFS). > > > > I understand I can have VMs hosted on top LVM with qcow2 disk images or > > on > > top thin LVM as raw thin LVM volumes. On both cases I still keep the > > option > > to be able to perform VM backups. Which one is the preferred way > > according > > to your experience? I will try to do some quick tests on the IO > > performance > > between the two. > > > > Also, I was thinking to replicate the VMs from one host to the other. I > > understand that for the Proxmox integrated replication feature I need > > ZFS > > backed storage. As I am not much into ZFS, although I really enjoy > > FreeNAS > > and its great features and will definitely look into it later, I was > > thinking to prepare a custom script that would snapshot the LVM volumes > > where the VM images reside and sync the VM disks from one host to the > > other, using rsync, just for a local copy of them. 
Of course I will > > take > > care to have an external media also to periodically export the VMs for > > backup purposes, though I would like to have a local copy of the VM > > disk > > images at the other host, readily available in case I face issues with > > the > > external media or one of the hosts. What do you think about this > > approach? > > Am I missing some other feature or better approach? > > > > Regrading the sync/replication of the VMs between the hosts (without > > ZFS), > > I was thinking also to have a dedicated local LVM volume for these > > periodic > > backup jobs configured within Proxmox and then the custom script to > > just > > rsync these backup images between the two hosts. This seems a simple > > one > > though it increases the storage requirements, while with the previous > > approach with the custom script, the script would snapshot, sync to the > > other side and remove the snapshot without keeping a redundant local > > copy > > of the disk image in the same host. > > > > Sorry for the long read. > > Appreciate any feedback. > > > > Alex > > _______________________________________________ > > pve-user mailing list > > pve-user at lists.proxmox.com > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From martin.konold at konsec.com Wed Sep 8 12:09:24 2021 From: martin.konold at konsec.com (Konold, Martin) Date: Wed, 08 Sep 2021 12:09:24 +0200 Subject: [PVE-User] VM storage and replication In-Reply-To: <99b82feffbce274c20bfefb18e41efd8@verdnatura.es> References: <99b82feffbce274c20bfefb18e41efd8@verdnatura.es> Message-ID: <467c4ba60fe8307ae8c32023e2a22a5f@konsec.com> Hi, while I highly recommend PBS I made good experience with using CIFS instead of NFS for performing backups to a central NAS. --- Regards ppa. Martin Konold -- Martin Konold - Prokurist, CTO KONSEC GmbH -? make things real Amtsgericht Stuttgart, HRB 23690 Gesch?ftsf?hrer: Andreas Mack Im K?ller 3, 70794 Filderstadt, Germany Am 2021-09-08 11:25, schrieb nada: > hi Alex > in case you have a limited budget now > you may do backups local backups but it is not recommended by some > cheap NAS ASAP > > temporal solution: > 1. create some filesystem LVM/ext4 @node1 and mount at /mnt/backup > 2. install NFS server @node1 and export /mnt/backup to node2 > 3. install NFS client @node2 and create mountpoint /mnt/backup > 4. edit fstab @node2 example > node1:/mnt/backup /mnt/backup nfs > defaults,noatime,bg 0 2 > 5. mount /mnt/backup > 6. add directory to your proxmox storage via webGUI > or add to /etc/pve/storage.cfg example > > dir: backup > path /mnt/backup > content vztmpl,backup,iso > > after this you will have backups accessible at both nodes > hope it will help you > Nada > > > On 2021-09-08 10:05, Alex K wrote: >> Hi all, >> >> I have setup a dual server setup, with latest proxmox v7.x. Each host >> with >> its own local storage. No shared storage (CEPH, GlusterFS). >> >> I understand I can have VMs hosted on top LVM with qcow2 disk images >> or on >> top thin LVM as raw thin LVM volumes. On both cases I still keep the >> option >> to be able to perform VM backups. Which one is the preferred way >> according >> to your experience? I will try to do some quick tests on the IO >> performance >> between the two. >> >> Also, I was thinking to replicate the VMs from one host to the other. 
>> I >> understand that for the Proxmox integrated replication feature I need >> ZFS >> backed storage. As I am not much into ZFS, although I really enjoy >> FreeNAS >> and its great features and will definitely look into it later, I was >> thinking to prepare a custom script that would snapshot the LVM >> volumes >> where the VM images reside and sync the VM disks from one host to the >> other, using rsync, just for a local copy of them. Of course I will >> take >> care to have an external media also to periodically export the VMs for >> backup purposes, though I would like to have a local copy of the VM >> disk >> images at the other host, readily available in case I face issues with >> the >> external media or one of the hosts. What do you think about this >> approach? >> Am I missing some other feature or better approach? >> >> Regrading the sync/replication of the VMs between the hosts (without >> ZFS), >> I was thinking also to have a dedicated local LVM volume for these >> periodic >> backup jobs configured within Proxmox and then the custom script to >> just >> rsync these backup images between the two hosts. This seems a simple >> one >> though it increases the storage requirements, while with the previous >> approach with the custom script, the script would snapshot, sync to >> the >> other side and remove the snapshot without keeping a redundant local >> copy >> of the disk image in the same host. >> >> Sorry for the long read. >> Appreciate any feedback. >> >> Alex >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From leandro at tecnetmza.com.ar Wed Sep 8 14:46:32 2021 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Wed, 8 Sep 2021 09:46:32 -0300 Subject: [PVE-User] ceph Message-ID: Hi guys , I have a 2 nodes cluster working, I will add a third node to the cluster. I would like to know the goods that a ceph storage can bring to my existing cluster. What is an easy / recommended way to implement it ? Wich hardware should I consider to use ? ############# Currently im facing the upgrade from pve 6 to pve 7. Having a ceph storage can make this process easier ? Regards. Leandro., From gilberto.nunes32 at gmail.com Wed Sep 8 14:55:15 2021 From: gilberto.nunes32 at gmail.com (Gilberto Ferreira) Date: Wed, 8 Sep 2021 09:55:15 -0300 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: >> Wich hardware should I consider to use ? At least SSD Enterprise/DataCenter Class 10G network cards. --- Gilberto Nunes Ferreira (47) 99676-7530 - Whatsapp / Telegram Em qua., 8 de set. de 2021 ?s 09:47, Leandro Roggerone < leandro at tecnetmza.com.ar> escreveu: > Hi guys , I have a 2 nodes cluster working, > I will add a third node to the cluster. > I would like to know the goods that a ceph storage can bring to my existing > cluster. > What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? > > ############# > > Currently im facing the upgrade from pve 6 to pve 7. > Having a ceph storage can make this process easier ? > > Regards. 
> Leandro., > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From elacunza at binovo.es Wed Sep 8 15:15:28 2021 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 8 Sep 2021 15:15:28 +0200 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: Hi Leandro, What are you using this cluster for? What local disks in the servers? RAID controller with cache? El 8/9/21 a las 14:46, Leandro Roggerone escribió: > Hi guys , I have a 2 nodes cluster working, > I will add a third node to the cluster. > I would like to know the goods that a ceph storage can bring to my existing > cluster. > What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? > > ############# > > Currently im facing the upgrade from pve 6 to pve 7. > Having a ceph storage can make this process easier ? > > Regards. > Leandro., > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > Eneko Lacunza Zuzendari teknikoa | Director técnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From cgmontana at gmail.com Wed Sep 8 20:17:52 2021 From: cgmontana at gmail.com (MONTAÑA Gonzalo) Date: Wed, 8 Sep 2021 15:17:52 -0300 Subject: [PVE-User] PVE 7 Documentation- admin guide Message-ID: Good day all. Hope you are fine and healthy. Is it correct to assume that there's an error in the manual pve-admin-guide-7-1? Shouldn't it be vmbr1 instead of vmbr0 with IP address 192.168.10.3 in the figure on page 31/477 of pve-admin-guide-7-1? User list policy prevents me from attaching the corresponding screen capture image. Thank you. Best regards. Carlos From leesteken at protonmail.ch Wed Sep 8 20:27:39 2021 From: leesteken at protonmail.ch (Arjen) Date: Wed, 08 Sep 2021 18:27:39 +0000 Subject: [PVE-User] PVE 7 Documentation- admin guide In-Reply-To: References: Message-ID: On Wednesday, September 8th, 2021 at 20:17, MONTAÑA Gonzalo wrote: > Good day all. > > Hope you are fine and healthy. > > Is it correct to assume that there's an error in the manual pve-admin-guide-7-1? > > Shouldn't it be vmbr1 instead of vmbr0 with IP address 192.168.10.3 in > the figure on page 31/477 of pve-admin-guide-7-1? The virtual bridges are on two different nodes, and therefore can have the same name vmbr0. Or both could be vmbr1 or both vmbr2. The default name of the first is always vmbr0. There is no mention of a vmbr1 anywhere, so why do you think it should have the name vmbr1? best regards, Arjen
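For context on the naming: the bridge is defined per node in /etc/network/interfaces, so each node independently declares its own vmbr0. A minimal sketch matching the figure's 192.168.10.3 node (the netmask and the physical NIC name are assumptions):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.3/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

The second node in the figure would carry an equivalent vmbr0 stanza with its own address.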
> > Shouldn?t be vmbr1 instead of vmbr0 with ip address 192.168.10.3 in > figure on page 31/477 of pve-admin-guide-7-1. The virtual bridges are on two different nodes, and therefore can have the same name vmbr0. Or both could be vmbr1 or both vmbr2. The default name of the first is always vmbr0. There is no mention of a vmbr1 anywhere, so why do you think it should have the name vmbr1? best regards, Arjen From rightkicktech at gmail.com Wed Sep 8 22:07:39 2021 From: rightkicktech at gmail.com (Alex K) Date: Wed, 8 Sep 2021 23:07:39 +0300 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: On Wed, Sep 8, 2021, 15:46 Leandro Roggerone wrote: > Hi guys , I have a 2 nodes cluster working, > I will add a third node to the cluster. > I would like to know the goods that a ceph storage can bring to my existing > cluster. > It brings high availability and Iive migration as any cluster aware shared storage with the cost of having to take care ceph. One advantage also is that you hopefully will not have downtime during maintenance. What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? > > ############# > > Currently im facing the upgrade from pve 6 to pve 7. > Having a ceph storage can make this process easier ? > > Regards. > Leandro., > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From rightkicktech at gmail.com Wed Sep 8 22:07:39 2021 From: rightkicktech at gmail.com (Alex K) Date: Wed, 8 Sep 2021 23:07:39 +0300 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: On Wed, Sep 8, 2021, 15:46 Leandro Roggerone wrote: > Hi guys , I have a 2 nodes cluster working, > I will add a third node to the cluster. > I would like to know the goods that a ceph storage can bring to my existing > cluster. > It brings high availability and Iive migration as any cluster aware shared storage with the cost of having to take care ceph. One advantage also is that you hopefully will not have downtime during maintenance. What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? > > ############# > > Currently im facing the upgrade from pve 6 to pve 7. > Having a ceph storage can make this process easier ? > > Regards. > Leandro., > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From lists at benappy.com Thu Sep 9 00:11:05 2021 From: lists at benappy.com (ic) Date: Thu, 9 Sep 2021 00:11:05 +0200 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> Hi there, > On 8 Sep 2021, at 14:46, Leandro Roggerone wrote: > > I would like to know the goods that a ceph storage can bring to my existing > cluster. > What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? First, HW. Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G ports) and two Intel X520-DA2 per server. Hook up each port of the Intel cards to each of the Nexuses, getting a full redundancy between network cards and switches. Add 4x40G DAC cables between the switches, setup 2 as VPC peer-links, 2 as a simple L2 trunk (can provide more details as why if needed). Use ports 0 from both NICs for ceph, ports 1 for VM traffic. 
From lists at benappy.com Thu Sep 9 00:11:05 2021 From: lists at benappy.com (ic) Date: Thu, 9 Sep 2021 00:11:05 +0200 Subject: [PVE-User] ceph In-Reply-To: References: Message-ID: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> Hi there, > On 8 Sep 2021, at 14:46, Leandro Roggerone wrote: > > I would like to know the goods that a ceph storage can bring to my > existing > cluster. > What is an easy / recommended way to implement it ? > Wich hardware should I consider to use ? First, HW. Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G ports) and two Intel X520-DA2 per server. Hook up each port of the Intel cards to each of the Nexuses, getting full redundancy between network cards and switches. Add 4x40G DAC cables between the switches, setup 2 as VPC peer-links, 2 as a simple L2 trunk (can provide more details as to why if needed). Use ports 0 from both NICs for ceph, ports 1 for VM traffic. This way you get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you lose one card or one switch, you still have 10 Gbps for each. The benefits? With the default configuration, your data lives in 3 places. Also, scale out. You know the expensive stuff, hyperconverged servers (Nutanix and such)? You get that with this. The performance is wild; just moved my customers from a proxmox cluster backed by a TrueNAS server (full flash, 4x10Gbps) to a 3 node cluster of AMD EPYC nodes with Ceph on local SATA SSDs and the VMs started flying. Keep your old storage infrastructure, whatever that is, for backups with PBS. YMMV Regards, ic From lists at benappy.com Fri Sep 10 21:36:50 2021 From: lists at benappy.com (ic) Date: Fri, 10 Sep 2021 21:36:50 +0200 Subject: [PVE-User] export ceph volume Message-ID: <3C201B4E-301D-4E5F-BB53-052E6A233B42@benappy.com> Hey guys, Is there an easy (if any at all) way to export a ceph volume through NFS? I'm migrating virtual machines from a non-ceph cluster to a ceph one and for many reasons, going through a 3rd party external storage is not an option, so I'm looking for a way to mount the ceph from the new cluster on the old one, move the VM disks there and then stop them on the old cluster and start them on the new one (I'm copying the .conf files by hand). Regards, ic From sysadmin at tashicell.com Sat Sep 11 08:08:54 2021 From: sysadmin at tashicell.com (System Administrator) Date: Sat, 11 Sep 2021 12:08:54 +0600 Subject: Nextcloud scaling on proxmox Message-ID: <1631340534183061095@tashicell.com> Hi all, I have set up a Proxmox cluster with three nodes and with Ceph storage. Configured one Nextcloud LXC on node1. Now I want to make multiple identical Nextcloud LXCs on node2 and node3 for load balancing, so the user can hit any one of these three instances using a DNS round-robin record. I would be grateful if any of you can help me go forward, specifically on the storage part. Looking forward to your response.
Regards, Sonam Namgyel TashiCell From alwin at antreich.com Sun Sep 12 11:43:00 2021 From: alwin at antreich.com (Alwin Antreich) Date: Sun, 12 Sep 2021 09:43:00 +0000 Subject: [PVE-User] export ceph volume In-Reply-To: <3C201B4E-301D-4E5F-BB53-052E6A233B42@benappy.com> References: <3C201B4E-301D-4E5F-BB53-052E6A233B42@benappy.com> Message-ID: <879d8982e92092a0fe926ed0135d0f31@antreich.com> Hi, September 10, 2021 9:36 PM, "ic" wrote: > Hey guys, > > Is there an easy (if any at all) way to export a ceph volume through NFS? No, there isn't. > I'm migrating virtual machines from a non-ceph cluster to a ceph one and for many reasons, going > through a 3rd party external storage is not an option so I'm looking for a way to mount the ceph > from the new cluster on the old one, move the VM disks there and then stop them on the old cluster > and start them on the new one (I'm copying the .conf files by hand). There are a couple of ways to migrate to RBD-based storage with different downtime. 1) connect the hypervisor directly to Ceph and move the VM's disk(s), if the hypervisor supports it 2) create a CephFS and use nfs-ganesha to export the FS; if not feasible, then use a VM as NFS server 3) use qemu-img to convert the VM's disk(s) directly to RBD; if no direct connection is possible, then use wdt, nc or ssh over the network I hope these give you some idea of how to proceed. Cheers, Alwin
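As a hedged illustration of option 3, a conversion straight into an RBD image could look like this (the source path, pool and image name are made up; it assumes a ceph.conf and keyring for the target cluster are reachable from the host running the command):

    qemu-img convert -p -f qcow2 -O raw \
        /var/lib/vz/images/100/vm-100-disk-0.qcow2 \
        rbd:pool01/vm-100-disk-0:conf=/etc/ceph/ceph.conf:id=admin

If the pool is configured as a storage on the target PVE cluster, a 'qm rescan --vmid 100' there should then pick the image up as an unused disk of VM 100.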
From elacunza at binovo.es Mon Sep 13 09:57:41 2021 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 13 Sep 2021 09:57:41 +0200 Subject: [PVE-User] export ceph volume In-Reply-To: <3C201B4E-301D-4E5F-BB53-052E6A233B42@benappy.com> References: <3C201B4E-301D-4E5F-BB53-052E6A233B42@benappy.com> Message-ID: <598f19f9-27c5-be18-ea78-3728cb2ee22b@binovo.es> Hi, El 10/9/21 a las 21:36, ic escribió: > Hey guys, > > Is there an easy (if any at all) way to export a ceph volume through NFS? > > I'm migrating virtual machines from a non-ceph cluster to a ceph one and for many reasons, going through a 3rd party external storage is not an option so I'm looking for a way to mount the ceph from the new cluster on the old one, move the VM disks there and then stop them on the old cluster and start them on the new one (I'm copying the .conf files by hand). > If both clusters are Proxmox, then you can configure the new cluster's ceph storage as RBD storage in the old cluster. Move VM disks online to the RBD storage, then stop the VM + copy the VM .conf to the new cluster + start the VM in the new cluster. If the storage name is the same on both clusters, I think you won't have to modify the VM .conf file. Cheers Eneko Lacunza Zuzendari teknikoa | Director técnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/
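A minimal sketch of the storage entry Eneko describes, as it could look in /etc/pve/storage.cfg on the old cluster (the storage ID, pool and monitor addresses are placeholders); the new cluster's keyring would be copied to /etc/pve/priv/ceph/remote-ceph.keyring:

    rbd: remote-ceph
        content images
        pool pool01
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        username admin
        krbd 0

With that in place, 'Move disk' on the old cluster can target remote-ceph directly, which covers the online part of the migration.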
From leandro at tecnetmza.com.ar Mon Sep 13 13:32:21 2021 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 13 Sep 2021 08:32:21 -0300 Subject: [PVE-User] ceph In-Reply-To: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> References: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> Message-ID: hi guys , your responses were very useful. Let's suppose I have my 3 nodes running and forming a cluster. Please confirm: a - Can I add the ceph storage at any time ? b - All nodes should be running the same pve version ? c - All nodes should have 1 or more unused storages with no hardware raid to be included in the ceph ? Should those storages (c) be exactly the same in capacity , speed , and so on ? What can go wrong if I don't have 10 but 1 Gbps ports ? Regards. Leandro Libre de virus. www.avast.com El mié, 8 sept 2021 a las 19:21, ic () escribió: > Hi there, > > > On 8 Sep 2021, at 14:46, Leandro Roggerone > wrote: > > > > I would like to know the goods that a ceph storage can bring to my > existing > > cluster. > > What is an easy / recommended way to implement it ? > > Wich hardware should I consider to use ? > > First, HW. > > Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G > ports) and two Intel X520-DA2 per server. > > Hook up each port of the Intel cards to each of the Nexuses, getting a > full redundancy between network cards and switches. > > Add 4x40G DAC cables between the switches, setup 2 as VPC peer-links, 2 as > a simple L2 trunk (can provide more details as why if needed). > > Use ports 0 from both NICs for ceph, ports 1 for VM traffic. This way you > get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you > lose one card or one switch, you still have 10 Gbps for each. > > The benefits? With default configuration, your data lives in 3 places. > Also, scale out. You know the expensive stuff, hyperconverged servers > (nutanix and such) ? You get that with this. > > The performance is wild, just moved my customers from a proxmox cluster > backed by a TrueNAS server (full flash, 4x10Gbps) to a 3 node cluster of > AMD EPYC nodes with Ceph on local SATA SSDs and the VMs started flying. > > Keep your old storage infrastructure, whatever that is, for backups with > PBS. > > YMMV > > Regards, ic > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From elacunza at binovo.es Mon Sep 13 13:44:22 2021 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 13 Sep 2021 13:44:22 +0200 Subject: [PVE-User] ceph In-Reply-To: References: <4D290E10-15B5-4816-8613-38438FC1F9CC@benappy.com> Message-ID: <748c22d0-27d2-f825-3ac0-561ec90b89d7@binovo.es> Hi Leandro, El 13/9/21 a las 13:32, Leandro Roggerone escribió: > hi guys , your responses were very useful. > Let's suppose I have my 3 nodes running and forming a cluster. > Please confirm: > a - Can I add the ceph storage at any time ? Yes > b - All nodes should be running the same pve version ? Generally speaking this is advisable. What versions do you have right now? > c - All nodes should have 1 or more unused storages with no hardware raid > to be included in the ceph ? It is advisable to have OSDs in at least 3 nodes, yes (some may say 4 is better). > Should those storages (c) be exactly the same in capacity , speed , and so on ? Roughly speaking, Ceph will perform as well as the worst disk configured for Ceph. If you plan to use SSD disks, use enterprise SSDs, not consumer/client SSDs. > What can go wrong if I don't have 10 but 1 Gbps ports ? Latency and overall performance of Ceph storage will be worse/slower. If you plan on using 1G, consider setting up separate "cluster" ports for Ceph (1G for VM traffic, 1G for ceph public, 1G for ceph cluster/private). We have clusters with both 10G and 1G (3x1G) networks. All of them work well but the 10G network is quite noticeable, especially with SSD disks. Cheers > Regards. > Leandro > > > > Libre
>>> What is an easy / recommended way to implement it?
>>> Which hardware should I consider to use?
>>
>> First, HW.
>>
>> Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G
>> ports) and two Intel X520-DA2 per server.
>>
>> Hook up each port of the Intel cards to each of the Nexuses, getting full
>> redundancy between network cards and switches.
>>
>> Add 4x40G DAC cables between the switches, setup 2 as VPC peer-links, 2 as
>> a simple L2 trunk (can provide more details as to why if needed).
>>
>> Use ports 0 from both NICs for ceph, ports 1 for VM traffic. This way you
>> get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you
>> lose one card or one switch, you still have 10 Gbps for each.
>>
>> The benefits? With default configuration, your data lives in 3 places.
>> Also, scale out. You know the expensive stuff, hyperconverged servers
>> (nutanix and such)? You get that with this.
>>
>> The performance is wild, just moved my customers from a proxmox cluster
>> backed by a TrueNAS server (full flash, 4x10Gbps) to a 3 node cluster of
>> AMD EPYC nodes with Ceph on local SATA SSDs and the VMs started flying.
>>
>> Keep your old storage infrastructure, whatever that is, for backups with
>> PBS.
>>
>> YMMV
>>
>> Regards, ic
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
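For reference, the split Eneko suggests (separate Ceph public and cluster networks, apart from the VM bridge) ends up in /etc/pve/ceph.conf; a minimal sketch with placeholder subnets that are not taken from the thread:

[global]
        # network the PVE nodes/clients use to reach MONs and OSDs
        public_network = 10.10.10.0/24
        # dedicated network for OSD replication and heartbeat traffic
        cluster_network = 10.10.20.0/24

With only 1G links, keeping these two networks (and the VM bridge) on separate physical ports is what keeps them from starving each other.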
From gaio at sv.lnf.it  Wed Sep 15 10:15:08 2021
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Wed, 15 Sep 2021 10:15:08 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
Message-ID: <20210915081508.GC3261@sv.lnf.it>

We are trying to move some VM disks from a cluster (PVE 5, storage LVM
thin) to a storage of type NFS (a PVE 6 server, Debian Buster standard
NFS server), using QCOW2 as the destination file/image format.

We have correctly moved some smaller disks (200GB), but if we try to
move a 'big' disk, we get:

  Sep 14 22:48:18 pveod1 pvedaemon[31552]: starting task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam:
  Sep 14 22:49:18 pveod1 pvedaemon[31552]: end task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam: storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command

Only this log row appears; no kernel/NFS errors on the NFS server or the
source machine.

I've tried to google for this error, or for 'nfs lock timeout', but
nothing relevant (to me) comes up.

Does someone have some feedback? Thanks.

-- 
  dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
  Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

  Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
  (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From smr at kmi.com  Wed Sep 15 13:14:51 2021
From: smr at kmi.com (Stefan M. Radman)
Date: Wed, 15 Sep 2021 11:14:51 +0000
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To: <20210915081508.GC3261@sv.lnf.it>
References: <20210915081508.GC3261@sv.lnf.it>
Message-ID: <55102B86-8119-4459-A945-589E921F24B2@kmi.com>

Hi Marco

The error message says "error with cfs lock".
If you google that you should get a lot of relevant information.

Regards

Stefan

> On Sep 15, 2021, at 10:15, Marco Gaiarin wrote:
>
> We are trying to move some VM disks from a cluster (PVE 5, storage LVM
> thin) to a storage of type NFS (a PVE 6 server, Debian Buster standard
> NFS server), using QCOW2 as the destination file/image format.
>
> We have correctly moved some smaller disks (200GB), but if we try to
> move a 'big' disk, we get:
>
> Sep 14 22:48:18 pveod1 pvedaemon[31552]: starting task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam:
> Sep 14 22:49:18 pveod1 pvedaemon[31552]: end task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam: storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
>
> Only this log row appears; no kernel/NFS errors on the NFS server or the
> source machine.
>
> I've tried to google for this error, or for 'nfs lock timeout', but
> nothing relevant (to me) comes up.
>
> Does someone have some feedback? Thanks.
>
> --
> dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
> Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
> Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
> marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
> http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
From gaio at sv.lnf.it  Wed Sep 15 15:12:32 2021
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Wed, 15 Sep 2021 15:12:32 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To:
References: <20210915081508.GC3261@sv.lnf.it>
Message-ID: <20210915131232.GN3261@sv.lnf.it>

Mandi! Stefan M. Radman via pve-user
  In chel di` si favelave...

> The error message says "error with cfs lock".
> If you google that you should get a lot of relevant information.

I've found some hits about CIFS (and I'm using NFS) and about permission
trouble (and, as just stated, I've already moved a 200GB disk onto the NFS
storage successfully; it is the 2TB one that doesn't move...).

So nothing there seems relevant to me... if you have a direct hit, thanks.

-- 
  dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
  Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

  Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
  (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From f.gruenbichler at proxmox.com  Wed Sep 15 15:21:56 2021
From: f.gruenbichler at proxmox.com (Fabian Grünbichler)
Date: Wed, 15 Sep 2021 15:21:56 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To: <20210915081508.GC3261@sv.lnf.it>
References: <20210915081508.GC3261@sv.lnf.it>
Message-ID: <1631711991.442ebhjlx2.astroid@nora.none>

On September 15, 2021 10:15 am, Marco Gaiarin wrote:
>
> We are trying to move some VM disks from a cluster (PVE 5, storage LVM
> thin) to a storage of type NFS (a PVE 6 server, Debian Buster standard
> NFS server), using QCOW2 as the destination file/image format.
>
> We have correctly moved some smaller disks (200GB), but if we try to
> move a 'big' disk, we get:
>
> Sep 14 22:48:18 pveod1 pvedaemon[31552]: starting task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam:
> Sep 14 22:49:18 pveod1 pvedaemon[31552]: end task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam: storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
>
> Only this log row appears; no kernel/NFS errors on the NFS server or the
> source machine.
>
> I've tried to google for this error, or for 'nfs lock timeout', but
> nothing relevant (to me) comes up.
>
> Does someone have some feedback? Thanks.

this is an issue with certain shared-storage operations in PVE - they have
to happen under a pmxcfs-lock, which has a hard timeout. if the operation
takes too long, the lock will run into the timeout, and the operation
fails.

there has been some recent development to improve the situation:

https://lists.proxmox.com/pipermail/pve-devel/2021-September/049879.html

but it hasn't been finalized yet.

From Christoph.Weber at xpecto.com  Wed Sep 15 16:03:04 2021
From: Christoph.Weber at xpecto.com (Christoph Weber)
Date: Wed, 15 Sep 2021 14:03:04 +0000
Subject: [PVE-User] pve-user Digest, Vol 162, Issue 12
In-Reply-To:
References:
Message-ID: <739ee43f9b4e459aac45c3a7fd88e20a@xpecto.com>

> Message: 1
> Date: Wed, 15 Sep 2021 10:15:08 +0200
> From: Marco Gaiarin
> To: pve-user at pve.proxmox.com
> Subject: [PVE-User] storage migration failed: error with cfs lock
>         'storage-nfs-scratch': unable to create image: got lock timeout -
>         aborting command
> Message-ID: <20210915081508.GC3261 at sv.lnf.it>

> We have correctly moved some smaller disks (200GB), but if we try to move a
> 'big' disk, we get:
>
> Sep 14 22:48:18 pveod1 pvedaemon[31552]: starting
> task UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam:
> Sep 14 22:49:18 pveod1 pvedaemon[31552]: end task
> UPID:pveod1:00007BE2:A90E5224:61410A92:qmmove:100:root at pam:
> storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to
> create image: got lock timeout - aborting command

I think this thread might be relevant:

https://forum.proxmox.com/threads/error-with-cfs-lock-unable-to-create-image-got-lock-timeout-aborting-command.65786/

Quote:

>>> we have a hard timeout of 60s for any operation obtaining a cluster lock,
>>> which includes volume allocation on shared storages.
...
>>> your storage is simply too slow when allocating bigger images it seems.
>>> you need to manually allocate them, for example using qemu-img create or
>>> convert.

From gaio at sv.lnf.it  Wed Sep 15 16:47:16 2021
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Wed, 15 Sep 2021 16:47:16 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To: <739ee43f9b4e459aac45c3a7fd88e20a@xpecto.com> <1631711991.442ebhjlx2.astroid@nora.none>
Message-ID: <20210915144716.GR3261@sv.lnf.it>

Mandi! Fabian Grünbichler
  In chel di` si favelave...

> this is an issue with certain shared-storage operations in PVE - they have
> to happen under a pmxcfs-lock, which has a hard timeout. if the operation
> takes too long, the lock will run into the timeout, and the operation
> fails.

OK. Good to know. But...

Mandi! Christoph Weber
  In chel di` si favelave...
> I think this thread might be relevant:
> https://forum.proxmox.com/threads/error-with-cfs-lock-unable-to-create-image-got-lock-timeout-aborting-command.65786/

...it seems I have exactly the same trouble; doing some more tests, it
seems that the timeout does not happen if I use RAW, but only for QCOW2;
but on the temporary NFS storage I don't have space for the RAW disk...

In this link someone says:

  you can manually create the image (with qemu-img create) and then rescan
  to reference it as an unused volume in the configuration

but I need to move the disk, not create it...

-- 
  dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
  Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

  Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
  (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From f.gruenbichler at proxmox.com  Thu Sep 16 09:26:15 2021
From: f.gruenbichler at proxmox.com (Fabian Grünbichler)
Date: Thu, 16 Sep 2021 09:26:15 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To: <20210915144716.GR3261@sv.lnf.it>
References: <20210915144716.GR3261@sv.lnf.it>
Message-ID: <1631776380.0shrfkj31s.astroid@nora.none>

On September 15, 2021 4:47 pm, Marco Gaiarin wrote:
> Mandi! Fabian Grünbichler
>   In chel di` si favelave...
>
>> this is an issue with certain shared-storage operations in PVE - they have
>> to happen under a pmxcfs-lock, which has a hard timeout. if the operation
>> takes too long, the lock will run into the timeout, and the operation
>> fails.
>
> OK. Good to know. But...
>
> Mandi! Christoph Weber
>   In chel di` si favelave...
>
>> I think this thread might be relevant:
>> https://forum.proxmox.com/threads/error-with-cfs-lock-unable-to-create-image-got-lock-timeout-aborting-command.65786/
>
> ...it seems I have exactly the same trouble; doing some more tests, it
> seems that the timeout does not happen if I use RAW, but only for QCOW2;
> but on the temporary NFS storage I don't have space for the RAW disk...

the problem (as described in the patch I linked earlier) is that for
qcow2, we currently always allocate the metadata for the qcow2 file. if
the image file is big enough, and the storage slow enough, this can take
too long. for raw there is no metadata (well, there is, but it does not
scale with the size of the file ;))

the patches allow selecting no pre-allocation for storages where this is
an issue - it basically trades off a bit of a performance hit when the
image file is filled with data against more/less work when initially
creating the image file.

> In this link someone says:
>
>   you can manually create the image (with qemu-img create) and then rescan
>   to reference it as an unused volume in the configuration
>
> but I need to move the disk, not create it...

a manual offline move would also be possible, it boils down to:
- create new volume (qemu-img create)
- qemu-img convert old volume to new volume
- change references in guest config to point to new volume
- delete old volume or add it back as unused to the guest config (the
  latter happens automatically if you do a rescan)

a manual online move should only be done if you really understand the
machinery involved, but it is also an option in theory.
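As a rough sketch of that offline procedure (the VM ID, storage name, device key and paths below are made-up examples, and the source is assumed to be an LVM-thin volume):

# 1) create the new qcow2 on the NFS storage; qemu-img defaults to
#    preallocation=off, so this returns quickly even for large images
qemu-img create -f qcow2 /mnt/pve/nfs-scratch/images/100/vm-100-disk-0.qcow2 2T

# 2) with the VM powered off, copy the data from the old volume
qemu-img convert -p -f raw -O qcow2 /dev/pve/vm-100-disk-0 \
    /mnt/pve/nfs-scratch/images/100/vm-100-disk-0.qcow2

# 3) let PVE pick up the new volume, then point the guest config at it
#    (adjust --scsi0 to whatever bus/device the VM actually uses)
qm rescan --vmid 100
qm set 100 --scsi0 nfs-scratch:100/vm-100-disk-0.qcow2

After the rescan the old volume shows up as an unused disk in the VM configuration and can be removed once the guest has been verified on the new image.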
last, you could temporarily switch out the hardcoded
'preallocation=metadata' in /usr/share/perl5/PVE/Storage/Plugin.pm with
'preallocation=off', then reload pveproxy and pvedaemon. running 'apt
install --reinstall libpve-storage-perl' reverts to the original code
(either after you're done, or if something goes wrong).

obviously all of this should be carefully tested with non-production
images/guests/systems first, as you are leaving supported/tested
territory!

From gaio at sv.lnf.it  Thu Sep 16 09:46:10 2021
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Thu, 16 Sep 2021 09:46:10 +0200
Subject: [PVE-User] storage migration failed: error with cfs lock 'storage-nfs-scratch': unable to create image: got lock timeout - aborting command
In-Reply-To: <1631776380.0shrfkj31s.astroid@nora.none>
References: <20210915144716.GR3261@sv.lnf.it> <1631776380.0shrfkj31s.astroid@nora.none>
Message-ID: <20210916074610.GB3211@sv.lnf.it>

Mandi! Fabian Grünbichler
  In chel di` si favelave...

> the problem (as described in the patch I linked earlier) is that for
> qcow2, we currently always allocate the metadata for the qcow2 file. if
> the image file is big enough, and the storage slow enough, this can take
> too long. for raw there is no metadata (well, there is, but it does not
> scale with the size of the file ;))

Perfectly clear. Thanks.

> obviously all of this should be carefully tested with non-production
> images/guests/systems first, as you are leaving supported/tested
> territory!

I've solved this in a simpler way: I've discovered that a 'thin'
filesystem (in my case, ZFS) can allocate a 2TB RAW image on an 800GB ZFS
volume, obviously provided that there's no more than 800GB of data in the
volume.

So I've simply moved the image as RAW.

Thanks!

-- 
  dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
  Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

  Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
  (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From smr at kmi.com  Thu Sep 16 12:12:05 2021
From: smr at kmi.com (Stefan M. Radman)
Date: Thu, 16 Sep 2021 10:12:05 +0000
Subject: search domain(s)
Message-ID:

Does PVE support multiple search domains?

When I enter multiple domains in the GUI, they all end up in
/etc/resolv.conf, but only the first one is displayed in the GUI.

That seems kind of inconsistent.

Stefan

From mhill at inett.de  Thu Sep 16 17:01:31 2021
From: mhill at inett.de (Maximilian Hill)
Date: Thu, 16 Sep 2021 17:01:31 +0200
Subject: [PVE-User] VM Failover
Message-ID:

Hello,

how exactly does ha-crm decide where a VM should end up in case of a
failover on poweroff or reboot (shutdown_policy=migrate)?

Does it balance in any way?

Kind regards,
Maximilian Hill
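For reference, the shutdown policy Maximilian mentions is the cluster-wide HA option in /etc/pve/datacenter.cfg; a minimal example, using only the value already named in the question:

ha: shutdown_policy=migrate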
From gregor at aeppelbroe.de  Fri Sep 17 11:29:21 2021
From: gregor at aeppelbroe.de (Gregor Burck)
Date: Fri, 17 Sep 2021 11:29:21 +0200
Subject: [PVE-User] proxmox-restore - performance issues
Message-ID: <20210917112921.EGroupware.VUEdiWgpWw30Fp0tioEjJWr@heim.aeppelbroe.de>

Hi,

I've set up PVE and PBS on the same machine:

HP DL380 Gen9
E5-2640 v3 @ 2.60GHz (2 x 8 cores)
256 GB RAM
2x 1TB SAMSUNG NVME PM983
12x 8 TB HP SAS HDDs

I created a ZFS RAID10 with the HDDs and the NVMEs.

I still get restore rates of 50 MB/s for a single restore job. If I start
multiple jobs in parallel, the per-job rate stays at that level, but I can
see with iotop that the aggregate rate is higher (max around 200 MB/s).

When I look at CPU utilisation with htop, it seems that a single job runs
only on one core, even when there are multiple tasks.

So I'm searching for the bottleneck; it really doesn't seem to be the HDDs.

Any idea so far?

Thanks for everything,

Bye

Gregor

From smr at kmi.com  Sun Sep 19 11:04:24 2021
From: smr at kmi.com (Stefan M. Radman)
Date: Sun, 19 Sep 2021 09:04:24 +0000
Subject: Proxmox downloads missing Last-Modified header
Message-ID: <60F2D890-1BD3-4868-AC2A-F1780D302EDD@kmi.com>

Hi Proxmox team

Today I noticed that Proxmox downloads are missing the Last-Modified header.

  Last-modified header missing -- time-stamps turned off.

That makes it impossible to use wget --timestamping.

The modification-date in the Content-Disposition header is apparently not
enough to make that work, not even with the --content-disposition option.

Any chance you can add the Last-Modified header to downloads?

Thanks

Stefan

smr:Downloads smr$ wget -S --content-disposition --trust-server-names --timestamping 'https://www.proxmox.com/en/downloads?task=callelement&format=raw&item_id=612&element=f85c494b-2b32-4109-b8c1-083cca2b7db6&method=download&args[0]=5c5c8789108e124c05c821fc7e848a5c'
--2021-09-19 10:46:32--  https://www.proxmox.com/en/downloads?task=callelement&format=raw&item_id=612&element=f85c494b-2b32-4109-b8c1-083cca2b7db6&method=download&args[0]=5c5c8789108e124c05c821fc7e848a5c
Resolving www.proxmox.com (www.proxmox.com)... 2a01:7e0:0:424::12, 212.224.123.69
Connecting to www.proxmox.com (www.proxmox.com)|2a01:7e0:0:424::12|:443... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Sun, 19 Sep 2021 08:46:33 GMT
  Content-Type: application/x-cd-image
  Content-Length: 1053097984
  Connection: keep-alive
  Set-Cookie: 2f4d6fdc46bcd694b9e7af987293628a=789tqthas60j85rqf4iqhvq3vh; path=/; secure; HttpOnly
  Pragma: public
  Cache-Control: must-revalidate, post-check=0, pre-check=0
  Expires: 0
  Content-Transfer-Encoding: binary
  Content-Disposition: attachment; filename="proxmox-ve_7.0-2.iso"; modification-date="Fri, 03 Sep 2021 07:23:20 +0200"; size=1053097984;
  Strict-Transport-Security: max-age=63072000
  X-Content-Type-Options: nosniff
Length: 1053097984 (1004M) [application/x-cd-image]
Last-modified header missing -- time-stamps turned off.
--2021-09-19 10:46:33--  https://www.proxmox.com/en/downloads?task=callelement&format=raw&item_id=612&element=f85c494b-2b32-4109-b8c1-083cca2b7db6&method=download&args[0]=5c5c8789108e124c05c821fc7e848a5c
Reusing existing connection to [www.proxmox.com]:443.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Sun, 19 Sep 2021 08:46:33 GMT
  Content-Type: application/x-cd-image
  Content-Length: 1053097984
  Connection: keep-alive
  Pragma: public
  Cache-Control: must-revalidate, post-check=0, pre-check=0
  Expires: 0
  Content-Transfer-Encoding: binary
  Content-Disposition: attachment; filename="proxmox-ve_7.0-2.iso"; modification-date="Fri, 03 Sep 2021 07:23:20 +0200"; size=1053097984;
  Strict-Transport-Security: max-age=63072000
  X-Content-Type-Options: nosniff
Length: 1053097984 (1004M) [application/x-cd-image]
Saving to: 'proxmox-ve_7.0-2.iso'

proxmox-ve_7.0-2.is   3%[                    ]  30.98M  2.22MB/s   eta 7m 19s

From lists at merit.unu.edu  Wed Sep 22 12:08:01 2021
From: lists at merit.unu.edu (mj)
Date: Wed, 22 Sep 2021 12:08:01 +0200
Subject: [PVE-User] office365 backup options
Message-ID: <056035bc-b8f0-2547-0b7d-e159e2684e20@merit.unu.edu>

Hi,

We're looking at options for backing up an office365 tenant we are now
setting up.

Is there any functionality (or plans thereof) related to Office365 backups
in the Proxmox Backup Solution? A quick google leads me to think: perhaps
not... But perhaps plans exist?

Thanks!
MJ

From christian.kraus at ckc-it.at  Thu Sep 23 23:44:02 2021
From: christian.kraus at ckc-it.at (Christian Kraus)
Date: Thu, 23 Sep 2021 21:44:02 +0000
Subject: [PVE-User] office365 backup options
In-Reply-To: <056035bc-b8f0-2547-0b7d-e159e2684e20@merit.unu.edu>
References: <056035bc-b8f0-2547-0b7d-e159e2684e20@merit.unu.edu>
Message-ID:

No, there is no such implementation - and I guess there are no plans to
implement support for such third-party services.

But you can use a Synology DiskStation for that, with free software for
Office 365 backup:

https://www.synology.com/de-de/dsm/feature/active_backup_office365

-----Ursprüngliche Nachricht-----
Von: mj
Gesendet: Mittwoch 22. September 2021 12:08
An: pve-user at lists.proxmox.com
Betreff: [PVE-User] office365 backup options

Hi,

We're looking at options for backing up an office365 tenant we are now
setting up.

Is there any functionality (or plans thereof) related to Office365 backups
in the Proxmox Backup Solution? A quick google leads me to think: perhaps
not... But perhaps plans exist?

Thanks!
MJ
_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From lists at merit.unu.edu  Mon Sep 27 15:13:11 2021
From: lists at merit.unu.edu (mj)
Date: Mon, 27 Sep 2021 15:13:11 +0200
Subject: [PVE-User] office365 backup options
In-Reply-To:
References: <056035bc-b8f0-2547-0b7d-e159e2684e20@merit.unu.edu>
Message-ID: <339c1c8c-662b-2854-6e2e-ef04ff0a7696@merit.unu.edu>

Hi Christian,

Thanks for your reply!
:-)

MJ

On 23/09/2021 23:44, Christian Kraus via pve-user wrote:
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From piccardi at truelite.it  Mon Sep 27 17:53:14 2021
From: piccardi at truelite.it (Simone Piccardi)
Date: Mon, 27 Sep 2021 17:53:14 +0200
Subject: Write PBS backup on removable drives
Message-ID:

Hi,

I'm starting to use PBS for Proxmox VM backups, but I'd like to have an
option to save a dump on a removable drive, which these days is much
cheaper than tapes, for offsite storage.

Given that support for writing to tape is already there, is there any plan
to have something similar for writing to a removable disk?

Regards
Simone
-- 
Simone Piccardi                         Truelite Srl
piccardi at truelite.it (email/jabber)     Via Monferrato, 6
Tel. +39-347-1032433                    50142 Firenze
http://www.truelite.it                  Tel. +39-055-7879597

From gilberto.nunes32 at gmail.com  Mon Sep 27 18:32:55 2021
From: gilberto.nunes32 at gmail.com (Gilberto Ferreira)
Date: Mon, 27 Sep 2021 13:32:55 -0300
Subject: [PVE-User] Write PBS backup on removable drives
In-Reply-To:
References:
Message-ID:

Since removable disks are seen by Linux as regular HDDs, everything you
need is already there. Just mount the disk into a local directory and
create the datastore.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram

Em seg., 27 de set. de 2021 às 13:02, Simone Piccardi via pve-user <
pve-user at lists.proxmox.com> escreveu:

>
> ---------- Forwarded message ----------
> From: Simone Piccardi
> To: pve-user at lists.proxmox.com
> Cc:
> Bcc:
> Date: Mon, 27 Sep 2021 17:53:14 +0200
> Subject: Write PBS backup on removable drives
>
> Hi,
>
> I'm starting to use PBS for Proxmox VM backups, but I'd like to have an
> option to save a dump on a removable drive, which these days is much
> cheaper than tapes, for offsite storage.
>
> Given that support for writing to tape is already there, is there any plan
> to have something similar for writing to a removable disk?
>
> Regards
> Simone
> --
> Simone Piccardi                         Truelite Srl
> piccardi at truelite.it (email/jabber)     Via Monferrato, 6
> Tel. +39-347-1032433                    50142 Firenze
> http://www.truelite.it                  Tel. +39-055-7879597
>
>
> ---------- Forwarded message ----------
> From: Simone Piccardi via pve-user
> To: pve-user at lists.proxmox.com
> Cc: Simone Piccardi
> Bcc:
> Date: Mon, 27 Sep 2021 17:53:14 +0200
> Subject: [PVE-User] Write PBS backup on removable drives
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From dietmar at proxmox.com  Tue Sep 28 06:37:30 2021
From: dietmar at proxmox.com (Dietmar Maurer)
Date: Tue, 28 Sep 2021 06:37:30 +0200 (CEST)
Subject: [PVE-User] Write PBS backup on removable drives
Message-ID: <1793699130.1383.1632803850611@webmail.proxmox.com>

> Given that support for writing to tape is already there, is there any plan
> to have something similar for writing to a removable disk?

Yes. First patches already on the list...
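Until that lands, the manual route Gilberto describes boils down to a few commands on the PBS host; a rough sketch, where the device name, label, mount point and datastore name are placeholders and not taken from the thread:

# one-time: format and label the removable disk, then mount it
mkfs.ext4 -L pbs-offsite /dev/sdX1
mkdir -p /mnt/pbs-offsite
mount /dev/disk/by-label/pbs-offsite /mnt/pbs-offsite

# one-time: register the mount point as a datastore
proxmox-backup-manager datastore create offsite /mnt/pbs-offsite

# back up or sync to the 'offsite' datastore while the disk is present,
# and unmount it cleanly before unplugging
umount /mnt/pbs-offsite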
From t.lamprecht at proxmox.com  Tue Sep 28 06:38:52 2021
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Tue, 28 Sep 2021 06:38:52 +0200
Subject: [PVE-User] Write PBS backup on removable drives
In-Reply-To:
References:
Message-ID:

Hi,

On 27.09.21 17:53, Simone Piccardi wrote:
> I'm starting to use PBS for Proxmox VM backups, but I'd like to have an
> option to save a dump on a removable drive, which these days is much
> cheaper than tapes, for offsite storage.
>
> Given that support for writing to tape is already there, is there any plan
> to have something similar for writing to a removable disk?

That is already possible with a small workaround, as Gilberto mentioned,
i.e., any storage device with a filesystem Linux can use can be made into a
datastore. Then you can trigger syncs once you have plugged it in.

But there's some work underway that will make this more integrated, see:
https://bugzilla.proxmox.com/show_bug.cgi?id=3156

cheers,
Thomas

From arengifoc at gmail.com  Tue Sep 28 19:38:39 2021
From: arengifoc at gmail.com (Angel Rengifo Cancino)
Date: Tue, 28 Sep 2021 12:38:39 -0500
Subject: [PVE-User] Kernel documentation for building modules
Message-ID:

Hello guys:

I'm running Proxmox VE 7 with kernel 5.11.22-4-pve. Every time I try to
build a module, or even when running "make oldconfig", I get an error
message like:

Kconfig:36: can't open file "Documentation/Kconfig"

I tried searching different Proxmox packages but I couldn't find one that
contains that directory and file.

Has anyone been able to build modules on Proxmox 7?

From stephane.caminade at ias.u-psud.fr  Wed Sep 29 17:34:47 2021
From: stephane.caminade at ias.u-psud.fr (Stephane Caminade)
Date: Wed, 29 Sep 2021 17:34:47 +0200
Subject: [PVE-User] backup to PBS: unable to acquire lock on snapshot directory
Message-ID: <93f43c26-001b-13f7-ef13-a4e484f7a0d3@ias.u-psud.fr>

Hello,

I have a question regarding a problem I encounter when running backups
with PBS.

*Context*: A cluster of 9 Proxmox nodes running 84 VMs, backing up to a
remote PBS.

*Versions*:
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve) with
proxmox-backup-client 1.1.12-1
proxmox-backup-server 2.0.10-1 running version: 2.0.9

*Problem*: Regularly, some of my VMs (3 regularly, and others randomly -
or I have not yet determined the common factor) will fail their backup
with the following error (example from one that fails regularly, but the
message is similar for all the failed backups):

ERROR: Backup of VM 297 failed - VM 297 qmp command 'backup' failed -
backup connect failed: command error: unable to acquire lock on snapshot
directory "/mnt/inf-proxmox-bkp/PBS-STORAGE/vm/297/2021-09-29T10:01:43Z" -
base snapshot is already locked by another operation

Any pointers as to where I could look for the source of this problem?

Best regards,

Stephane

From gregor at aeppelbroe.de  Thu Sep 30 15:07:25 2021
From: gregor at aeppelbroe.de (Gregor Burck)
Date: Thu, 30 Sep 2021 15:07:25 +0200
Subject: [PVE-User] proxmox-restore - performance issues
In-Reply-To: <20210917112921.EGroupware.VUEdiWgpWw30Fp0tioEjJWr@heim.aeppelbroe.de>
References: <20210917112921.EGroupware.VUEdiWgpWw30Fp0tioEjJWr@heim.aeppelbroe.de>
Message-ID: <20210930150725.EGroupware.6J5OaeEYpIws136ZihOWnZX@heim.aeppelbroe.de>

Hi,

I made some other tests with the same machine but a different processor.

I used an Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, which has a higher
frequency.

The restore rate for a single job didn't change.

Any idea what it could be?
Bye Gregor From d.csapak at proxmox.com Thu Sep 30 15:24:44 2021 From: d.csapak at proxmox.com (Dominik Csapak) Date: Thu, 30 Sep 2021 15:24:44 +0200 Subject: [PVE-User] proxmox-restore - performance issues In-Reply-To: <20210930150725.EGroupware.6J5OaeEYpIws136ZihOWnZX@heim.aeppelbroe.de> References: <20210917112921.EGroupware.VUEdiWgpWw30Fp0tioEjJWr@heim.aeppelbroe.de> <20210930150725.EGroupware.6J5OaeEYpIws136ZihOWnZX@heim.aeppelbroe.de> Message-ID: On 9/30/21 15:07, Gregor Burck wrote: > Hi, > > I made some other test with the same machine but an other proccessor. > > I use an Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, wich has a higher > frequency. > > The restore rate for an single job dind't change. > > Any idea what it could be? > > Bye > > Gregor > > hi, can you tell us a bit more about the setup and test? is the target storage able to handle more than 50MB/s? how do you measure the 50MB/s? with kind regards Dominik
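One way to narrow that down is to benchmark the pieces separately; a rough sketch, where the directory, pool and repository names are only examples:

# sequential write throughput of the restore target (run it against the
# ZFS pool/dataset the VM disks get restored to; remove the test files afterwards)
fio --name=restore-target --directory=/rpool/fio-test --rw=write \
    --bs=4M --size=8G --ioengine=psync --group_reporting

# rough per-host, per-stream upper bounds for TLS, SHA-256, compression
# and AES on the client side of a restore
proxmox-backup-client benchmark --repository root@pam@localhost:datastore

If fio shows the pool sustaining far more than 50 MB/s for large sequential writes, the single-core behaviour seen in htop could point more towards per-stream client-side processing than towards the disks.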