From pve at junkyard.4t2.com Mon Nov 1 14:58:53 2021
From: pve at junkyard.4t2.com (Tom Weber)
Date: Mon, 1 Nov 2021 14:58:53 +0100
Subject: [PVE-User] How to secure the PBS Encryption Key
Message-ID: <31206b58-3a29-ef80-3534-13c556918832@4t2.com>

Hello,

in https://forum.proxmox.com/threads/cant-set-an-symlink-in-etc-pve-for-zfs-encryption.96934/ fireon describes his problem securing the PBS encryption key. I think his solution is only a workaround.

Suppose I encrypt the local VM/data storage on a node (but not /, which stays unencrypted for ease of booting/remote management): I end up with a PBS encryption key lying around in cleartext that anyone who gets their hands on the machine can read. Now all it takes is access to a remote PBS with the synced encrypted backups to get all the protected data that was in the VMs/CTs lying on the encrypted storage of the original node.

This is probably not a problem of PVE or PBS on their own, but in combination I think it's a weakness. "Stealing" hardware should not give you such cleartext keys.

Any idea how to circumvent this? My first idea went in the same direction as fireon's, symlinking the key from some secure space - but this obviously doesn't work. Another option would be some way to manually unlock the key after booting the node, or maybe an area (folder) in /etc/pve/ that would need unlocking before such information can be used - though that's probably quite some development effort?

Best,
Tom

From me at bennetgallein.de Mon Nov 1 17:33:41 2021
From: me at bennetgallein.de (Bennet Gallein)
Date: Mon, 1 Nov 2021 17:33:41 +0100
Subject: [PVE-User] Moving Disks via API
Message-ID: <067422B6-8CD0-4173-B1FF-9FD719BA78AE@getmailspring.com>

Hello,

I'm currently evaluating Proxmox for block-storage. For this, I need to move disks between VMs via code. From my understanding there is currently no way to do this, other than writing some custom scripts on the Proxmox host itself, which is not a viable option for fast-growing clusters. Is there currently work being put into developing an API to automate this? If not, is there something I can help with to push this forward?

Rel.: https://pve.proxmox.com/wiki/Moving_disk_image_from_one_KVM_machine_to_another

Best Regards,
Freundliche Grüße aus Lüneburg,
Bennet Gallein

Bennet Gallein IT Solutions
E: me at bennetgallein.de (mailto:me at bennetgallein.de)
A: Elbinger Str. 15 (https://maps.google.com/?q=Elbinger%20Str.%2015)
W: https://bennetgallein.de

From mir at miras.org Mon Nov 1 17:38:50 2021
From: mir at miras.org (Michael Rasmussen)
Date: Mon, 1 Nov 2021 17:38:50 +0100
Subject: [PVE-User] Moving Disks via API
In-Reply-To: <067422B6-8CD0-4173-B1FF-9FD719BA78AE@getmailspring.com>
References: <067422B6-8CD0-4173-B1FF-9FD719BA78AE@getmailspring.com>
Message-ID: <20211101173850.0fa09394@sleipner.datanom.net>

On Mon, 1 Nov 2021 17:33:41 +0100 Bennet Gallein wrote:

> Hello,
>
> fast-growing clusters. Is there currently work being put into
> developing an API to automate this? If not, is there something I can
> help with to push this forward? Rel.:

What is wrong with this?
https://pve.proxmox.com/pve-docs/api-viewer/index.html -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E mir datanom net https://pgp.key-server.io/pks/lookup?search=0xE501F51C mir miras org https://pgp.key-server.io/pks/lookup?search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: Kirkland, Illinois, law forbids bees to fly over the village or through any of its streets. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From me at bennetgallein.de Mon Nov 1 17:56:03 2021 From: me at bennetgallein.de (Bennet Gallein) Date: Mon, 1 Nov 2021 17:56:03 +0100 Subject: [PVE-User] Moving Disks via API In-Reply-To: References: Message-ID: <757A44A4-2EDF-4077-8DCB-06CD6E8022D7@getmailspring.com> Hello Michael, nothing is wrong with the API-viewer, I like it. There just is no API to move a disk from one VM to another and I was wondering if someone using PVE in production has found a way to automate this without modifying PVE source-code or running extra software on the PVE host. Best Regards, Freundliche Gr??e aus L?neburg, Bennet Gallein Bennet Gallein IT Solutions E: me at bennetgallein.de (mailto:me at bennetgallein.de) A: Elbinger Str. 15 (https://maps.google.com/?q=Elbinger%20Str.%2015) W: https://bennetgallein.de On Nov. 1 2021, at 5:38 pm, Michael Rasmussen via pve-user wrote: > On Mon, 1 Nov 2021 17:33:41 +0100 > Bennet Gallein wrote: > > > Hello, > > > > fast-growing clusters. Is there currently being work put into > > developing an API to automate this? If not, is there something I can > > help with to push this forward? Rel.: > What is wrong with this? > https://pve.proxmox.com/pve-docs/api-viewer/index.html > > -- > Hilsen/Regards > Michael Rasmussen > > Get my public GnuPG keys: > michael rasmussen cc > https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E > mir datanom net > https://pgp.key-server.io/pks/lookup?search=0xE501F51C > mir miras org > https://pgp.key-server.io/pks/lookup?search=0xE3E80917 > -------------------------------------------------------------- > /usr/games/fortune -es says: > Kirkland, Illinois, law forbids bees to fly over the village or through > any of its streets. > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gaio at lilliput.linux.it Mon Nov 1 21:43:33 2021 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Mon, 1 Nov 2021 21:43:33 +0100 Subject: [PVE-User] Bullseye LXC and logrotate... In-Reply-To: ; from SmartGate on Mon, Nov 01, 2021 at 22:06:01PM +0100 References: <90vs4i-hle.ln1@hermione.lilliput.linux.it> Message-ID: Mandi! Arjen via pve-user In chel di` si favelave... Only for the sake of google... >> Thanks, i can confirm that fixed. It is NOT fixed, changing logrotate systemd unit configuration does nothing. > According to this forum post[1] by one of the Proxmox staff, enabling nesting is the way forward. > [1] https://forum.proxmox.com/threads/lxc-container-upgrade-to-bullseye-slow-login-and-apparmor-errors.93064/post-409030 Bingo! This works! ;-) Thanks. -- The number of UNIX installations has grown to 10, with more expected. 
(_The UNIX Programmer's Manual_, Second Edition, June 1972) From d.csapak at proxmox.com Tue Nov 2 08:47:38 2021 From: d.csapak at proxmox.com (Dominik Csapak) Date: Tue, 2 Nov 2021 08:47:38 +0100 Subject: [PVE-User] Moving Disks via API In-Reply-To: <757A44A4-2EDF-4077-8DCB-06CD6E8022D7@getmailspring.com> References: <757A44A4-2EDF-4077-8DCB-06CD6E8022D7@getmailspring.com> Message-ID: <6f38c9ed-7f39-964e-db6c-c88da7561bb7@proxmox.com> On 11/1/21 17:56, Bennet Gallein wrote: > Hello Michael, > > nothing is wrong with the API-viewer, I like it. There just is no API to move a disk from one VM to another and I was wondering if someone using PVE in production has found a way to automate this without modifying PVE source-code or running extra software on the PVE host. > Best Regards, > Freundliche Gr??e aus L?neburg, > Bennet Gallein > Bennet Gallein IT Solutions hi, there is currently a patchset on the devel list that does what you want (i think) https://lists.proxmox.com/pipermail/pve-devel/2021-October/050491.html it's not yet merged (and i think there will be a new version) kind regards Dominik From me at bennetgallein.de Tue Nov 2 09:02:50 2021 From: me at bennetgallein.de (Bennet Gallein) Date: Tue, 2 Nov 2021 09:02:50 +0100 (CET) Subject: [PVE-User] Moving Disks via API In-Reply-To: <6f38c9ed-7f39-964e-db6c-c88da7561bb7@proxmox.com> References: <757A44A4-2EDF-4077-8DCB-06CD6E8022D7@getmailspring.com> <6f38c9ed-7f39-964e-db6c-c88da7561bb7@proxmox.com> Message-ID: <1608874848.285220.1635840170613@ox.hosteurope.de> Hello Dominik, even though i'm on the devel list I somehow missed that. But yes, that is exactly what I'm looking for! Thanks, I'll keep myself updated! Best Regards, Bennet > Dominik Csapak hat am 02.11.2021 08:47 geschrieben: > > > On 11/1/21 17:56, Bennet Gallein wrote: > > Hello Michael, > > > > nothing is wrong with the API-viewer, I like it. There just is no API to move a disk from one VM to another and I was wondering if someone using PVE in production has found a way to automate this without modifying PVE source-code or running extra software on the PVE host. > > Best Regards, > > Freundliche Gr??e aus L?neburg, > > Bennet Gallein > > Bennet Gallein IT Solutions > > hi, > > there is currently a patchset on the devel list that does > what you want (i think) > > https://lists.proxmox.com/pipermail/pve-devel/2021-October/050491.html > > it's not yet merged (and i think there will be a new version) > > kind regards > Dominik > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From adamb at medent.com Fri Nov 5 14:23:06 2021 From: adamb at medent.com (Adam Boyhan) Date: Fri, 5 Nov 2021 13:23:06 +0000 Subject: [PVE-User] Encrypted ZFS Pool's & Replication Message-ID: Are there any plans or ETA to allow ZFS replication with encrypted ZFS datasets? This seems like a pretty big aspect to be missing. This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. 
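For context on what such a feature needs under the hood: OpenZFS itself can already replicate encrypted datasets without ever exposing the key on the target, via raw sends. A minimal sketch with hypothetical dataset, snapshot and host names - this is plain ZFS tooling, not PVE's built-in replication (pvesr/pve-zsync), which is what the question above is about:

# check whether a dataset is encrypted and whether its key is currently loaded
zfs get encryption,keystatus,keyformat rpool/data/vm-100-disk-0

# replicate a snapshot in raw (still-encrypted) form; the receiving host never needs the key
zfs snapshot rpool/data/vm-100-disk-0@repl-20211105
zfs send -w rpool/data/vm-100-disk-0@repl-20211105 | ssh backup-host zfs recv -u tank/replica/vm-100-disk-0

Whether and when PVE's storage replication will drive raw sends like this is exactly the open question raised above.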
From pve-user at jugra.de Fri Nov 5 15:31:36 2021 From: pve-user at jugra.de (pve-user at jugra.de) Date: Fri, 5 Nov 2021 15:31:36 +0100 Subject: [PVE-User] Encrypted ZFS Pool's & Replication In-Reply-To: References: Message-ID: +1 From f.thommen at dkfz-heidelberg.de Mon Nov 8 13:07:10 2021 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Mon, 8 Nov 2021 13:07:10 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? Message-ID: Dear all, we are doing daily backups of every VM in our PVE cluster to the internal Ceph storage (snapshot mode, ZSTD compression). However one of the VMs takes six to seven hours for the backup, which is unbearable and also interferes with other backup processes. The delay is probably due to a 50% filled 500G vdisk attached to the VM. Is there a way to exclude this specific vdisk from the PVE backup schedule, while still retaining the rest of the VM in the backup? We don't have PBS in place and will not have it in the near future for different reasons. So PBS would not be an option for us. Cheers, Frank From elacunza at binovo.es Mon Nov 8 13:22:35 2021 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 8 Nov 2021 13:22:35 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: References: Message-ID: <5605a4ec-f7b9-5344-c0ee-c632cc0ee1af@binovo.es> Hi Frank, El 8/11/21 a las 13:07, Frank Thommen escribi?: > > Dear all, > > we are doing daily backups of every VM in our PVE cluster to the > internal Ceph storage (snapshot mode, ZSTD compression).? However one > of the VMs takes six to seven hours for the backup, which is > unbearable and also interferes with other backup processes.? The delay > is probably due to a 50% filled 500G vdisk attached to the VM.? Is > there a way to exclude this specific vdisk from the PVE backup > schedule, while still retaining the rest of the VM in the backup? > > We don't have PBS in place and will not have it in the near future for > different reasons. So PBS would not be an option for us. > Yes, in VM Hardware config tab, edit vdisk and uncheck "Backup" in advanced. Cheers Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From nada at verdnatura.es Mon Nov 8 13:33:14 2021 From: nada at verdnatura.es (nada) Date: Mon, 08 Nov 2021 13:33:14 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: References: Message-ID: <2bdf9f309bb88dfba0012b1bbae5386d@verdnatura.es> hi Frank you can exclude CT/QM or some path see details at man # man vzdump example via webGUI cluster > backup > edit > selection > ... exclude some VM ... via CLI 0 2 * * 6 root vzdump --exclude 104 --mode snapshot --quiet 1 --maxfiles 1 --storage backup --compress lzo --mailnotification always --mailto admin at some.where best regards Nada On 2021-11-08 13:07, Frank Thommen wrote: > Dear all, > > we are doing daily backups of every VM in our PVE cluster to the > internal Ceph storage (snapshot mode, ZSTD compression). However one > of the VMs takes six to seven hours for the backup, which is > unbearable and also interferes with other backup processes. The delay > is probably due to a 50% filled 500G vdisk attached to the VM. Is > there a way to exclude this specific vdisk from the PVE backup > schedule, while still retaining the rest of the VM in the backup? 
> > We don't have PBS in place and will not have it in the near future for > different reasons. So PBS would not be an option for us. > > Cheers, Frank > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From f.thommen at dkfz-heidelberg.de Mon Nov 8 14:22:30 2021 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Mon, 8 Nov 2021 14:22:30 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: References: Message-ID: <74df2d30-721d-7172-389d-c7993b320c66@dkfz-heidelberg.de> Hi Eneko, On 08.11.21 13:22, Eneko Lacunza via pve-user wrote: > Hi Frank, > > El 8/11/21 a las 13:07, Frank Thommen escribi?: >> >> Dear all, >> >> we are doing daily backups of every VM in our PVE cluster to the >> internal Ceph storage (snapshot mode, ZSTD compression).? However one >> of the VMs takes six to seven hours for the backup, which is >> unbearable and also interferes with other backup processes.? The delay >> is probably due to a 50% filled 500G vdisk attached to the VM.? Is >> there a way to exclude this specific vdisk from the PVE backup >> schedule, while still retaining the rest of the VM in the backup? >> >> We don't have PBS in place and will not have it in the near future for >> different reasons. So PBS would not be an option for us. >> > > Yes, in VM Hardware config tab, edit vdisk and uncheck "Backup" in > advanced. Thanks a lot for the hint. I should probably look more often into the advanced configuration settings :-) Frank > > Cheers > > Eneko Lacunza From f.thommen at dkfz-heidelberg.de Mon Nov 8 14:25:24 2021 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Mon, 8 Nov 2021 14:25:24 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: <2bdf9f309bb88dfba0012b1bbae5386d@verdnatura.es> References: <2bdf9f309bb88dfba0012b1bbae5386d@verdnatura.es> Message-ID: Hi Nada, thanks, but this would exclude the complete VM from the backup (we have that for a few special VMs). However we wanted to exclude only one specific vdisk of a specific VM and not the VM itself. I have implemented Eneko's suggestion (see the previous mail in this thread). Cheers, Frank On 08.11.21 13:33, nada wrote: > hi Frank > you can exclude CT/QM or some path see details at man > # man vzdump > > example > via webGUI > ? cluster > backup > edit > selection > ... exclude some VM ... > via CLI > ? 0 2 * * 6??? root??? vzdump --exclude 104 --mode snapshot --quiet 1 > --maxfiles 1 --storage backup --compress lzo --mailnotification always > --mailto admin at some.where > > best regards > Nada > > > On 2021-11-08 13:07, Frank Thommen wrote: >> Dear all, >> >> we are doing daily backups of every VM in our PVE cluster to the >> internal Ceph storage (snapshot mode, ZSTD compression).? However one >> of the VMs takes six to seven hours for the backup, which is >> unbearable and also interferes with other backup processes.? The delay >> is probably due to a 50% filled 500G vdisk attached to the VM.? Is >> there a way to exclude this specific vdisk from the PVE backup >> schedule, while still retaining the rest of the VM in the backup? >> >> We don't have PBS in place and will not have it in the near future for >> different reasons. So PBS would not be an option for us. 
>> >> Cheers, Frank >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From drbidwell at gmail.com Mon Nov 8 14:37:00 2021 From: drbidwell at gmail.com (Daniel Bidwell) Date: Mon, 08 Nov 2021 08:37:00 -0500 Subject: [PVE-User] Can I run a cluster with 2 different architectures Message-ID: <62aef4f9bacebb0f50ef4983c80123eb11b63fe4.camel@gmail.com> I have a small cluster built on intel desktops and I want to build a raspberry PI cluster and migrate the ceph file system from the intel cluster to the PI cluster. Can I add the PI machines to the intel cluster, migrate the data volume to the PI machines and remove the intel machines? I am not intending to migrate any vms or containers. -- Daniel Bidwell From f.thommen at dkfz-heidelberg.de Tue Nov 9 19:47:44 2021 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Tue, 9 Nov 2021 19:47:44 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: <74df2d30-721d-7172-389d-c7993b320c66@dkfz-heidelberg.de> References: <74df2d30-721d-7172-389d-c7993b320c66@dkfz-heidelberg.de> Message-ID: <21de5876-dba4-bc42-84e6-cfa8fbab2154@dkfz-heidelberg.de> Hello Eneko, On 08.11.21 14:22, Frank Thommen wrote: > Hi Eneko, > > On 08.11.21 13:22, Eneko Lacunza via pve-user wrote: >> Hi Frank, >> >> El 8/11/21 a las 13:07, Frank Thommen escribi?: >>> >>> Dear all, >>> >>> we are doing daily backups of every VM in our PVE cluster to the >>> internal Ceph storage (snapshot mode, ZSTD compression).? However one >>> of the VMs takes six to seven hours for the backup, which is >>> unbearable and also interferes with other backup processes.? The >>> delay is probably due to a 50% filled 500G vdisk attached to the VM. >>> Is there a way to exclude this specific vdisk from the PVE backup >>> schedule, while still retaining the rest of the VM in the backup? >>> >>> We don't have PBS in place and will not have it in the near future >>> for different reasons. So PBS would not be an option for us. >>> >> >> Yes, in VM Hardware config tab, edit vdisk and uncheck "Backup" in >> advanced. > > Thanks a lot for the hint.? I should probably look more often into the > advanced configuration settings :-) Just as a final confirmation: Your suggestion worked like a charm. The backup time came down from >6 hours to 8 minutes :-). Thanks again Frank > > Frank > >> >> Cheers >> >> Eneko Lacunza > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Nov 10 09:21:08 2021 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 10 Nov 2021 09:21:08 +0100 Subject: [PVE-User] Excluding individual vdisks from PVE backups? In-Reply-To: <21de5876-dba4-bc42-84e6-cfa8fbab2154@dkfz-heidelberg.de> References: <74df2d30-721d-7172-389d-c7993b320c66@dkfz-heidelberg.de> <21de5876-dba4-bc42-84e6-cfa8fbab2154@dkfz-heidelberg.de> Message-ID: <4cd7c68d-d861-a889-c6fc-4c4769db84a3@binovo.es> Hi Frank, El 9/11/21 a las 19:47, Frank Thommen escribi?: > >>>> we are doing daily backups of every VM in our PVE cluster to the >>>> internal Ceph storage (snapshot mode, ZSTD compression).? 
>>>> However one of the VMs takes six to seven hours for the backup, which is
>>>> unbearable and also interferes with other backup processes. The
>>>> delay is probably due to a 50% filled 500G vdisk attached to the
>>>> VM. Is there a way to exclude this specific vdisk from the PVE
>>>> backup schedule, while still retaining the rest of the VM in the
>>>> backup?
>>>>
>>>> We don't have PBS in place and will not have it in the near future
>>>> for different reasons. So PBS would not be an option for us.
>>>>
>>>
>>> Yes, in VM Hardware config tab, edit vdisk and uncheck "Backup" in
>>> advanced.
>>
>> Thanks a lot for the hint. I should probably look more often into
>> the advanced configuration settings :-)
>
> Just as a final confirmation: Your suggestion worked like a charm.
> The backup time came down from >6 hours to 8 minutes :-).

Glad this works for you. We use this for some VMs too :)

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From jmr.richardson at gmail.com Tue Nov 16 04:25:55 2021
From: jmr.richardson at gmail.com (JR Richardson)
Date: Mon, 15 Nov 2021 21:25:55 -0600
Subject: [PVE-User] Removing Node from Cluster A and adding to Cluster B
Message-ID: <000a01d7da99$aa9a2840$ffce78c0$@gmail.com>

Hey Folks,

Quick question I hope, I have two production clusters running PVE6 and working fine, 13 nodes in one, 4 in another. Both clusters have separate switch VLANs and IP space so they are logically isolated. I have a need to shift a couple of nodes from my larger cluster to my smaller cluster.

My question is, after I remove the nodes from one cluster, I'm reading best practice is to re-install PVE from scratch before adding back to the cluster. But is this to add it back to the same cluster, or is this best practice regardless of adding to a new cluster as well?

Thanks.

JR

From elacunza at binovo.es Tue Nov 16 09:20:37 2021
From: elacunza at binovo.es (Eneko Lacunza)
Date: Tue, 16 Nov 2021 09:20:37 +0100
Subject: [PVE-User] Removing Node from Cluster A and adding to Cluster B
In-Reply-To: <000a01d7da99$aa9a2840$ffce78c0$@gmail.com>
References: <000a01d7da99$aa9a2840$ffce78c0$@gmail.com>
Message-ID:

Hi JR,

El 16/11/21 a las 4:25, JR Richardson escribió:
> Quick question I hope, I have two production clusters running PVE6 and
> working fine, 13 nodes in one, 4 in another. Both clusters have separate
> switch VLANs and IP space so they are logically isolated. I have a need to
> shift a couple of nodes from my larger cluster to my smaller cluster.
>
> My question is, after I remove the nodes from one cluster, I'm reading best
> practice is to re-install PVE from scratch before adding back to the
> cluster. But is this to add it back to the same cluster, or is this best
> practice regardless of adding to a new cluster as well?

As I understand it, reinstall is recommended for both cases. I don't think trying otherwise is worth it, multiple issues can arise.

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda.
Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From jmr.richardson at gmail.com Wed Nov 17 02:09:42 2021
From: jmr.richardson at gmail.com (JR Richardson)
Date: Tue, 16 Nov 2021 19:09:42 -0600
Subject: [PVE-User] Removing Node from Cluster A and adding to Cluster B
Message-ID: <000001d7db4f$cde671d0$69b35570$@gmail.com>

Date: Tue, 16 Nov 2021 09:20:37 +0100
From: Eneko Lacunza
To: pve-user at lists.proxmox.com
Subject: Re: [PVE-User] Removing Node from Cluster A and adding to Cluster B
Message-ID:
Content-Type: text/plain; charset=utf-8; format=flowed

Hi JR,

El 16/11/21 a las 4:25, JR Richardson escribió:
> Quick question I hope, I have two production clusters running PVE6 and
> working fine, 13 nodes in one, 4 in another. Both clusters have
> separate switch VLANs and IP space so they are logically isolated. I
> have a need to shift a couple of nodes from my larger cluster to my smaller cluster.
>
> My question is, after I remove the nodes from one cluster, I'm reading
> best practice is to re-install PVE from scratch before adding back to
> the cluster. But is this to add it back to the same cluster, or is this
> best practice regardless of adding to a new cluster as well?

As I understand it, reinstall is recommended for both cases. I don't think trying otherwise is worth it, multiple issues can arise.

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Hi Eneko,

Appreciate the feedback, that is precisely what I did and didn't have any issues.

Thanks.

JR

From devzero at web.de Wed Nov 17 08:22:10 2021
From: devzero at web.de (Roland privat)
Date: Wed, 17 Nov 2021 08:22:10 +0100
Subject: [PVE-User] Removing Node from Cluster A and adding to Cluster B
In-Reply-To: <000001d7db4f$cde671d0$69b35570$@gmail.com>
References: <000001d7db4f$cde671d0$69b35570$@gmail.com>
Message-ID: <1D371F61-AA60-483E-BFC3-B68179844482@web.de>

I'm curious which issues can arise and what's so difficult about removing and re-joining. I would find it useful to be able to migrate hosts between clusters, as for now there is no easy or quick method to migrate virtual machines between clusters.

roland

Von meinem iPhone gesendet

> Am 17.11.2021 um 02:10 schrieb JR Richardson :
>
> Date: Tue, 16 Nov 2021 09:20:37 +0100
> From: Eneko Lacunza
> To: pve-user at lists.proxmox.com
> Subject: Re: [PVE-User] Removing Node from Cluster A and adding to
> Cluster B
> Message-ID:
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi JR,
>
> El 16/11/21 a las 4:25, JR Richardson escribió:
>> Quick question I hope, I have two production clusters running PVE6 and
>> working fine, 13 nodes in one, 4 in another. Both clusters have
>> separate switch VLANs and IP space so they are logically isolated. I
>> have a need to shift a couple of nodes from my larger cluster to my
> smaller cluster.
>>
>> My question is, after I remove the nodes from one cluster, I'm reading
>> best practice is to re-install PVE from scratch before adding back to
>> the cluster. But is this to add it back to the same cluster, or is this
>> best practice regardless of adding to a new cluster as well?
>
> As I understand it, reinstall is recommended for both cases. I don't think
> trying otherwise is worth it, multiple issues can arise.
>
> Cheers
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Hi Eneko,
>
> Appreciate the feedback, that is precisely what I did and didn't have any
> issues.
>
> Thanks.
> > JR > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From martin at proxmox.com Wed Nov 17 11:32:12 2021 From: martin at proxmox.com (Martin Maurer) Date: Wed, 17 Nov 2021 11:32:12 +0100 Subject: [PVE-User] Proxmox VE 7.1 released! Message-ID: <82d88ac5-7548-8df4-9499-ce510d07dad6@proxmox.com> Hi all, we're excited to announce the release of Proxmox Virtual Environment 7.1. It's based on Debian 11.1 "Bullseye" but using a newer Linux kernel 5.13, QEMU 6.1, LXC 4.0, Ceph 16.2.6, and OpenZFS 2.1. and countless enhancements and bugfixes. Proxmox Virtual Environment brings several new functionalities and many improvements for management tasks in the web interface: support for Windows 11 including TPM, enhanced creation wizard for VM/container, ability to set backup retention policies per backup job in the GUI, and a new scheduler daemon supporting more flexible schedules.. Here is a selection of the highlights - Debian 11.1 "Bullseye", but using a newer Linux kernel 5.13 - LXC 4.0, Ceph 16.2.6, QEMU 6.1, and OpenZFS 2.1 - VM wizard with defaults for Windows 11 (q35, OVMF, TPM) - New backup scheduler daemon for flexible scheduling options - Backup retention - Protection flag for backups - Two-factor Authentication: WebAuthn, recovery keys, multiple factors for a single account - New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux - and many more enhancements, bugfixes, etc. As always, we have included countless bugfixes and improvements on many places; see the release notes for all details. Release notes https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.1 Press release https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-1-released Video tutorial https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-1 Download https://www.proxmox.com/en/downloads Alternate ISO download: http://download.proxmox.com/iso Documentation https://pve.proxmox.com/pve-docs Community Forum https://forum.proxmox.com Bugtracker https://bugzilla.proxmox.com Source code https://git.proxmox.com We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting! FAQ Q: Can I upgrade Proxmox VE 7.0 to 7.1 via GUI? A: Yes. Q: Can I upgrade Proxmox VE 6.4 to 7.1 with apt? A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 Q: Can I install Proxmox VE 7.1 on top of Debian 11.1 "Bullseye"? A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.1 with Ceph Octopus/Pacific? A: This is a two step process. First, you have to upgrade Proxmox VE from 6.4 to 7.1, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow exactly the upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific Q: Where can I get more information about feature updates? A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter. 
--
Best Regards,

Martin Maurer
martin at proxmox.com
https://www.proxmox.com

From leithner at itronic.at Thu Nov 18 19:02:44 2021
From: leithner at itronic.at (Harald Leithner)
Date: Thu, 18 Nov 2021 19:02:44 +0100
Subject: [PVE-User] Ceph 16 upgrade warning also valid for proxmox users?
Message-ID:

Hi,

the ceph release packages site has a big red warning not to upgrade from earlier versions to Pacific.

So since I haven't updated to Pacific yet, should I wait or is it safe to upgrade for whatever reason?

https://docs.ceph.com/en/latest/releases/pacific/

--
DATE: 01 NOV 2021.

DO NOT UPGRADE TO CEPH PACIFIC FROM AN OLDER VERSION.

A recently-discovered bug (https://tracker.ceph.com/issues/53062) can cause data corruption. This bug occurs during OMAP format conversion for clusters that are updated to Pacific. New clusters are not affected by this bug.

The trigger for this bug is BlueStore's repair/quick-fix functionality. This bug can be triggered in two known ways:

manually via the ceph-bluestore-tool, or

automatically, by OSD if bluestore_fsck_quick_fix_on_mount is set to true.

The fix for this bug is expected to be available in Ceph v16.2.7.

DO NOT set bluestore_quick_fix_on_mount to true. If it is currently set to true in your configuration, immediately set it to false.

DO NOT run ceph-bluestore-tool's repair/quick-fix commands.
--

thanks

Harald

--
ITronic
Harald Leithner
Wiedner Hauptstraße 120/5.1, 1050 Wien, Austria
Tel: +43-1-545 0 604
Mobil: +43-699-123 78 4 78
Mail: leithner at itronic.at | itronic.at

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL:

From lindsay.mathieson at gmail.com Fri Nov 19 02:26:35 2021
From: lindsay.mathieson at gmail.com (Lindsay Mathieson)
Date: Fri, 19 Nov 2021 11:26:35 +1000
Subject: [PVE-User] Setting bluestore_quick_fix_on_mount
Message-ID: <87d908a8-4604-9f31-20c5-dc5c279237d1@gmail.com>

Ok, dumb question - lots of warnings about ensuring bluestore_quick_fix_on_mount is set to false before upgrading (I'm upgrading from 6.4 to 7.1).

How do I check what *bluestore_quick_fix_on_mount* is set to, and how do I set it to false?

From uwe.sauter.de at gmail.com Fri Nov 19 08:18:04 2021
From: uwe.sauter.de at gmail.com (Uwe Sauter)
Date: Fri, 19 Nov 2021 08:18:04 +0100
Subject: [PVE-User] Setting bluestore_quick_fix_on_mount
In-Reply-To: <87d908a8-4604-9f31-20c5-dc5c279237d1@gmail.com>
References: <87d908a8-4604-9f31-20c5-dc5c279237d1@gmail.com>
Message-ID:

Short:
# ceph config get osd bluestore_fsck_quick_fix_on_mount

Long:
There is documentation at [1] regarding the upgrade process Octopus -> Pacific where this issue is mentioned, as well as the above command.

Regards,

Uwe

[1] https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific

Am 19.11.21 um 02:26 schrieb Lindsay Mathieson:
> Ok, dumb question - lots of warnings about ensuring bluestore_quick_fix_on_mount is set to false
> before upgrading (I'm upgrading from 6.4 to 7.1).
>
>
> How do I check what *bluestore_quick_fix_on_mount* is set to, and how do I set it to false?
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From d.csapak at proxmox.com Fri Nov 19 08:42:56 2021
From: d.csapak at proxmox.com (Dominik Csapak)
Date: Fri, 19 Nov 2021 08:42:56 +0100
Subject: [PVE-User] Ceph 16 upgrade warning also valid for proxmox users?
In-Reply-To:
References:
Message-ID:

On 11/18/21 19:02, Harald Leithner wrote:
> Hi,
>
> the ceph release packages site has a big red warning not to upgrade from
> earlier versions to Pacific.
>
> So since I haven't updated to Pacific yet, should I wait or is it safe to
> upgrade for whatever reason?
>
> https://docs.ceph.com/en/latest/releases/pacific/

Hi,

this bug is already mentioned in our upgrade guide:

https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific#Check_if_bluestore_fsck_quick_fix_on_mount_is_disabled

You can ofc run Octopus for now, if you want to play it safe and wait for a fix in Pacific (though AFAIK if that option is disabled the upgrade should be ok, but maybe i am missing something)

kind regards

From elacunza at binovo.es Fri Nov 19 09:36:44 2021
From: elacunza at binovo.es (Eneko Lacunza)
Date: Fri, 19 Nov 2021 09:36:44 +0100
Subject: Ceph 16 upgrade warning also valid for proxmox users?
In-Reply-To:
References:
Message-ID: <03881cfb-f803-e0d3-374f-6106c1d674dc@binovo.es>

Hi Harald,

We have 2 clusters upgraded to Pacific, they're working well, although we had an issue triggered by Proxmox 7 cluster instability and Ceph didn't recover very well (see my previous posts).

You won't hit the data-losing bug if you don't trigger it after upgrading, that is the current understanding here and on the Ceph users mailing list.

Anyway I recommend not to upgrade to Pacific yet, we stopped upgrading clusters we manage until Pacific is more mature.

Cheers

El 18/11/21 a las 19:02, Harald Leithner escribió:
> Hi,
>
> the ceph release packages site has a big red warning not to upgrade
> from earlier versions to Pacific.
>
> So since I haven't updated to Pacific yet, should I wait or is it safe
> to upgrade for whatever reason?
>
> https://docs.ceph.com/en/latest/releases/pacific/
>
> --
> DATE: 01 NOV 2021.
>
> DO NOT UPGRADE TO CEPH PACIFIC FROM AN OLDER VERSION.
>
> A recently-discovered bug (https://tracker.ceph.com/issues/53062) can
> cause data corruption. This bug occurs during OMAP format conversion
> for clusters that are updated to Pacific.
> New clusters are not affected by this bug.
>
> The trigger for this bug is BlueStore's repair/quick-fix
> functionality. This bug can be triggered in two known ways:
>
> manually via the ceph-bluestore-tool, or
>
> automatically, by OSD if bluestore_fsck_quick_fix_on_mount is set to
> true.
>
> The fix for this bug is expected to be available in Ceph v16.2.7.
>
> DO NOT set bluestore_quick_fix_on_mount to true. If it is currently
> set to true in your configuration, immediately set it to false.
>
> DO NOT run ceph-bluestore-tool's repair/quick-fix commands.
> --
>
> thanks
>
> Harald
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From leithner at itronic.at Fri Nov 19 09:46:45 2021
From: leithner at itronic.at (Harald Leithner)
Date: Fri, 19 Nov 2021 09:46:45 +0100
Subject: [PVE-User] Ceph 16 upgrade warning also valid for proxmox users?
In-Reply-To:
References:
Message-ID: <6d566e47-42cc-711e-60df-b9096f4df6c1@itronic.at>

Hi,

thanks Dominik for the heads-up on the upgrade documentation. Also thanks to Eneko Lacunza for sharing his experience.

Since I have no hurry to update we will wait till 16.2.7 is released.

Harald

Am 19.11.2021 um 08:42 schrieb Dominik Csapak:
> On 11/18/21 19:02, Harald Leithner wrote:
>> Hi,
>>
>> the ceph release packages site has a big red warning not to upgrade
>> from earlier versions to Pacific.
>>
>> So since I haven't updated to Pacific yet, should I wait or is it safe
>> to upgrade for whatever reason?
>>
>> https://docs.ceph.com/en/latest/releases/pacific/
>
> Hi,
>
> this bug is already mentioned in our upgrade guide:
>
> https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific#Check_if_bluestore_fsck_quick_fix_on_mount_is_disabled
>
>
> You can ofc run Octopus for now, if you want to play it safe and wait
> for a fix in Pacific (though AFAIK if that option is disabled the
> upgrade should be ok, but maybe i am missing something)
>
> kind regards
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

--
ITronic
Harald Leithner
Wiedner Hauptstraße 120/5.1, 1050 Wien, Austria
Tel: +43-1-545 0 604
Mobil: +43-699-123 78 4 78
Mail: leithner at itronic.at | itronic.at

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL:

From elacunza at binovo.es Fri Nov 19 10:27:40 2021
From: elacunza at binovo.es (Eneko Lacunza)
Date: Fri, 19 Nov 2021 10:27:40 +0100
Subject: PVE 7 corosync more talkative?
Message-ID: <48d25ea9-594b-96c2-e798-1340a1f3fe42@binovo.es>

Hi all,

We have upgraded 2 clusters to PVE 7, and we are seeing logs like these daily:

Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] link: host: 4 link: 0 is down
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] link: host: 2 link: 0 is down
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] link: host: 3 link: 0 is down
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] link: host: 1 link: 0 is down
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 4 has no active links
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 2 has no active links
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 3 has no active links
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Nov 18 08:29:58 guadalupe corosync[2485]:   [KNET  ] host: host: 1 has no active links
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] rx: host: 2 link: 0 is up
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] rx: host: 1 link: 0 is up
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] rx: host: 3 link: 0 is up
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] rx: host: 4 link: 0 is up
Nov 18 08:30:01 guadalupe corosync[2485]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Nov 18 08:30:02 guadalupe corosync[2485]:   [TOTEM ] Token has not been received in 3712 ms

We didn't see this kind of log before upgrading to PVE7, and other clusters with PVE6 and PVE5 don't have those either. Can this be that corosync in PVE7 is more verbose or has logging set higher?

Thanks

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From lindsay.mathieson at gmail.com Fri Nov 19 14:08:04 2021
From: lindsay.mathieson at gmail.com (Lindsay Mathieson)
Date: Fri, 19 Nov 2021 23:08:04 +1000
Subject: [PVE-User] Setting bluestore_quick_fix_on_mount
In-Reply-To:
References: <87d908a8-4604-9f31-20c5-dc5c279237d1@gmail.com>
Message-ID: <99b29338-530d-fb51-e7f7-632d4c501156@gmail.com>

Brilliant, thanks Uwe. nb. it was set to true, so just as well I checked :)

On 19/11/2021 5:18 pm, Uwe Sauter wrote:
> Short:
> # ceph config get osd bluestore_fsck_quick_fix_on_mount
>
> Long:
> There is documentation at [1] regarding the upgrade process Octopus -> Pacific where this issue is
> mentioned, as well as the above command.
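For anyone checking their own cluster before the upgrade: the flag can be read and, if necessary, cleared with the standard ceph config commands. A short sketch (same option name as in the thread, run on any node with a working ceph client):

# show the current value for the OSDs
ceph config get osd bluestore_fsck_quick_fix_on_mount

# make sure it is disabled before upgrading to Pacific (see the warning earlier in the thread)
ceph config set osd bluestore_fsck_quick_fix_on_mount false

Note that this only covers the cluster-wide configuration database; if the option was also set in a node's local ceph.conf, that copy would need to be removed there as well.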
From lindsay.mathieson at gmail.com Fri Nov 19 14:11:41 2021
From: lindsay.mathieson at gmail.com (Lindsay Mathieson)
Date: Fri, 19 Nov 2021 23:11:41 +1000
Subject: [PVE-User] Ceph 16 upgrade warning also valid for proxmox users?
In-Reply-To: <6d566e47-42cc-711e-60df-b9096f4df6c1@itronic.at>
References: <6d566e47-42cc-711e-60df-b9096f4df6c1@itronic.at>
Message-ID: <81cf000c-23fa-b721-0258-5f13354216bc@gmail.com>

On 19/11/2021 6:46 pm, Harald Leithner wrote:
> Since I have no hurry to update we will wait till 16.2.7 is released.

Yah, I thought it was part of upgrading to 7, since it isn't I will wait also. Thanks everyone.

- Lindsay

From ralf.storm at konzept-is.de Tue Nov 23 14:04:57 2021
From: ralf.storm at konzept-is.de (Ralf Storm)
Date: Tue, 23 Nov 2021 14:04:57 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
Message-ID: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de>

Hello,

I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be corrected, but the last Node gets stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode setting"

Anybody had this error before?

Best regards

--
Ralf Storm
Systemadministrator
Konzept Informationssysteme GmbH
....

From uwe.sauter.de at gmail.com Tue Nov 23 14:14:13 2021
From: uwe.sauter.de at gmail.com (Uwe Sauter)
Date: Tue, 23 Nov 2021 14:14:13 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
In-Reply-To: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de>
References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de>
Message-ID:

Am 23.11.21 um 14:04 schrieb Ralf Storm:
> Hello,
>
> I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be
> corrected,
>
> but the last Node gets stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode setting"
>
> Anybody had this error before?

Yes, unrelated to Proxmox. But the SAS controller works fine even with this message.

Do you have issues accessing your storage?

>
> Best regards
>

From uwe.sauter.de at gmail.com Tue Nov 23 14:37:07 2021
From: uwe.sauter.de at gmail.com (Uwe Sauter)
Date: Tue, 23 Nov 2021 14:37:07 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
In-Reply-To: <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de>
References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de> <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de>
Message-ID: <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com>

Am 23.11.21 um 14:34 schrieb Ralf Storm:
> Hello,
>
>
> yes - I have a problem, as mentioned in my first mail, the system is stuck at boot with this message
> and then after a few minutes reboots...

Can you reboot with the previous kernel and check the firmware of the controller?

dmesg | grep mpt | grep -i fw

should show the firmware version.

>
>
> Am 23/11/2021 um 14:14 schrieb Uwe Sauter:
>> Am 23.11.21 um 14:04 schrieb Ralf Storm:
>>> Hello,
>>>
>>> I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be
>>> corrected,
>>>
>>> but the last Node gets stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode
>>> setting"
>>>
>>> Anybody had this error before?
>> Yes, unrelated to Proxmox. But the SAS controller works fine even with this message.
>>
>> Do you have issues accessing your storage?
>>
>>> Best regards
>>>
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From lists at benappy.com Tue Nov 23 20:58:33 2021
From: lists at benappy.com (ic)
Date: Tue, 23 Nov 2021 20:58:33 +0100
Subject: [PVE-User] ARP issue
Message-ID: <47BB1F6A-23C7-40EB-B046-A78F6D2FBB62@benappy.com>

Hi,

I'm running PVE 7.0 on a bunch of servers. I noticed something strange.

There is a vmbr2 containing one physical interface (ens19) with an IP (10.X.Y.Z/24). There is a vmbr1 containing NO physical interface with another IP (10.A.B.C/24) (outside of the range of vmbr2, even if this is irrelevant for this problem).

Somehow, the other physical servers connected to ens19 get an ARP reply with the MAC address of ens19 for the IP on vmbr1 (which, again, has no physical interface). In an 'ip a' output, this MAC address appears only on ens19 and vmbr2. vmbr1 has its own MAC (different from the physical MAC of ens19/vmbr2).

What am I missing? In my setup I need vmbr2 to have the same IP on each physical host and not leak on the outside network, so this is pretty annoying :(

BR, ic

From ralf.storm at konzept-is.de Wed Nov 24 11:24:38 2021
From: ralf.storm at konzept-is.de (Ralf Storm)
Date: Wed, 24 Nov 2021 11:24:38 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
In-Reply-To: <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com>
References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de> <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de> <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com>
Message-ID:

Hello,

Proxmox is starting again after upgrading the firmware of the 9211-8i controllers...

And yes it did boot with the old Kernel, so the Firmware of the controllers was incompatible with the new Kernel.

Ralf Storm
Systemadministrator
Konzept Informationssysteme GmbH

Am 23.11.2021 um 14:37 schrieb Uwe Sauter:
> Am 23.11.21 um 14:34 schrieb Ralf Storm:
>> Hello,
>>
>>
>> yes - I have a problem, as mentioned in my first mail, the system is stuck at boot with this message
>> and then after a few minutes reboots...
> Can you reboot with the previous kernel and check the firmware of the controller?
>
> dmesg | grep mpt | grep -i fw
>
> should show the firmware version.
>
>
>>
>> Am 23/11/2021 um 14:14 schrieb Uwe Sauter:
>>> Am 23.11.21 um 14:04 schrieb Ralf Storm:
>>>> Hello,
>>>>
>>>> I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be
>>>> corrected,
>>>>
>>>> but the last Node gets stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode
>>>> setting"
>>>>
>>>> Anybody had this error before?
>>> Yes, unrelated to Proxmox. But the SAS controller works fine even with this message.
>>>
>>> Do you have issues accessing your storage?
>>>
>>>> Best regards
>>>>
>>> _______________________________________________
>>> pve-user mailing list
>>> pve-user at lists.proxmox.com
>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From stefan.radman at me.com Wed Nov 24 15:31:10 2021
From: stefan.radman at me.com (Stefan Radman)
Date: Wed, 24 Nov 2021 15:31:10 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
In-Reply-To:
References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de> <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de> <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com>
Message-ID:

Hi Ralf

Would you mind sharing the old and new firmware versions of your controller and your kernel version?
It might help other PVE users to avoid the problem you have run into.

Thank you

Stefan

> On Nov 24, 2021, at 11:24, Ralf Storm wrote:
>
> Hello,
>
> Proxmox is starting again after upgrading the firmware of the 9211-8i controllers...
>
> And yes it did boot with the old Kernel, so the Firmware of the controllers was incompatible with the new Kernel.
>
>
> Ralf Storm
>
>
> Systemadministrator
>
>
>
> Konzept Informationssysteme GmbH
>
> Am 23.11.2021 um 14:37 schrieb Uwe Sauter:
>> Am 23.11.21 um 14:34 schrieb Ralf Storm:
>>> Hello,
>>>
>>>
>>> yes - I have a problem, as mentioned in my first mail, the system is stuck at boot with this message
>>> and then after a few minutes reboots...
>> Can you reboot with the previous kernel and check the firmware of the controller?
>>
>> dmesg | grep mpt | grep -i fw
>>
>> should show the firmware version.
>>
>>
>>> Am 23/11/2021 um 14:14 schrieb Uwe Sauter:
>>>> Am 23.11.21 um 14:04 schrieb Ralf Storm:
>>>>> Hello,
>>>>>
>>>>> I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be
>>>>> corrected,
>>>>>
>>>>> but the last Node gets stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode
>>>>> setting"
>>>>>
>>>>> Anybody had this error before?
>>>> Yes, unrelated to Proxmox. But the SAS controller works fine even with this message.
>>>>
>>>> Do you have issues accessing your storage?
>>>>
>>>>> Best regards
>>>>>
>>>> _______________________________________________
>>>> pve-user mailing list
>>>> pve-user at lists.proxmox.com
>>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From ralf.storm at konzept-is.de Wed Nov 24 15:45:07 2021
From: ralf.storm at konzept-is.de (Ralf Storm)
Date: Wed, 24 Nov 2021 15:45:07 +0100
Subject: [PVE-User] OS Error after upgrade to 7.1
In-Reply-To:
References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de> <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de> <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com>
Message-ID: <801379f0-658e-ee56-770f-86df05eeed76@konzept-is.de>

Hello Stefan,

I had the old firmware (from 2015?) 000..26 - the IR version, not the IT version - and this did not work on the 5.13 Kernel: stuck at boot and the server rebooting after a while. The newest version I found was from 2016, 000..27, which works. It is hard to find any firmware as the vendor does not supply it anymore; there are descriptions of how to update and links to the firmware, but the links are dead...

It has to be updated in an EFI shell from a USB drive.

Hope that helps,

best regards

Ralf

Hi Ralf

Would you mind sharing the old and new firmware versions of your controller and your kernel version?

It might help other PVE users to avoid the problem you have run into.

Thank you

Stefan

> On Nov 24, 2021, at 11:24, Ralf Storm wrote:
> Hello,
>
> Proxmox is starting again after upgrading the firmware of the 9211-8i controllers...
>
> And yes it did boot with the old Kernel, so the Firmware of the controllers was incompatible with the new Kernel.
>
>
> Ralf Storm
>
>
> Systemadministrator
>
>
>
> Konzept Informationssysteme GmbH
>
> Am 23.11.2021 um 14:37 schrieb Uwe Sauter:
>> Am 23.11.21 um 14:34 schrieb Ralf Storm:
>>> Hello,
>>>
>>>
>>> yes - I have a problem, as mentioned in my first mail, the system is stuck at boot with this message
>>> and then after a few minutes reboots...
>> Can you reboot with the previous kernel and check the firmware of the controller? >> >> dmesg | grep mpt | grep -i fw >> >> should show the firmware version. >> >> >>> Am 23/11/2021 um 14:14 schrieb Uwe Sauter: >>>> Am 23.11.21 um 14:04 schrieb Ralf Storm: >>>>> Hello, >>>>> >>>>> I?m currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs to be >>>>> corrected, >>>>> >>>>> but the last Node stucks at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode >>>>> setting" >>>>> >>>>> Anybody had this error before? >>>> Yes, unrelated to Proxmox. But the SAS controller works fine even with this message. >>>> >>>> Do you have issues accessing your storage? >>>> >>>>> Best regards >>>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at lists.proxmox.com >>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From uwe.sauter.de at gmail.com Wed Nov 24 16:36:40 2021 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Wed, 24 Nov 2021 16:36:40 +0100 Subject: [PVE-User] OS Error after upgrade to 7.1 In-Reply-To: <801379f0-658e-ee56-770f-86df05eeed76@konzept-is.de> References: <8d0421f8-9c29-ddd4-fce2-068a2e9981c5@konzept-is.de> <737e7b21-85e0-f7a8-9dff-a6e0a3470951@konzept-is.de> <37bf83e6-a9f8-fa22-ae56-b9e704ea5003@gmail.com> <801379f0-658e-ee56-770f-86df05eeed76@konzept-is.de> Message-ID: <3781af86-dd35-4a35-0287-9514ef0e81be@gmail.com> Am 24.11.21 um 15:45 schrieb Ralf Storm: > Hello Stefan, > > I had the old Firmware (from2015?) 000..26 - the ri version not the it version - this did not work > on the 5.13 Kernel - stuck at boot and server rebooting after a while > > the newest version i found was from 2016 000..27 , which works, it is hard to find any firmwares as > the vendor does not supply it anymore, there are descriptions howto update and links to the > firmware, but the links are dead... > If you go to https://www.broadcom.com/support/download-search and then: Product Group -> Legacy Products (all the way down) Product Family -> Legacy Host Bus Adapters Product Name -> SAS 9211-8i Host Bus Adapter => Search you will find all firmware versions and related tools, docs, etc. that Broadcom shares. The latest available firmware is 20.00.07.00 dated 2016-04-04. > Has to be updated in efi shell from usb drive > > > hope that helps, > > best regards > > Ralf > > > Hi Ralf > > Would you mind to share the old and new firmware versions of your controller and your kernel version? > > It might help other PVE users to avoid the problem you have run into. > > Thank you > > Stefan > >> On Nov 24, 2021, at 11:24, Ralf Storm? wrote: >> >> Hello, >> >> >> Proxmox is starting again after upgrading the firmware of the 9211-8i controllers... >> >> And yes it did boot with the old Kernel, so the Firmware of the controllers was incompatible with >> thie new Kernel. >> >> >> Ralf Storm >> >> >> Systemadministrator >> >> >> >> Konzept Informationssysteme GmbH >> >> Am 23.11.2021 um 14:37 schrieb Uwe Sauter: >>> Am 23.11.21 um 14:34 schrieb Ralf Storm: >>>> Hello, >>>> >>>> >>>> yes - I have a problem, as mentioned in my first mail, the system is stuck at boot with this >>>> message >>>> and then after a few minutes reboots... >>> Can you reboot with the previous kernel and check the firmware of the controller? 
>>>
>>> dmesg | grep mpt | grep -i fw
>>>
>>> should show the firmware version.
>>>
>>>
>>>> Am 23/11/2021 um 14:14 schrieb Uwe Sauter:
>>>>> Am 23.11.21 um 14:04 schrieb Ralf Storm:
>>>>>> Hello,
>>>>>>
>>>>>> I'm currently upgrading my 7-node Cluster, 1 node without problems, 2 nodes had renamed NICs
>>>>>> to be
>>>>>> corrected,
>>>>>>
>>>>>> but the last Node is stuck at boot with this message: "mpt2sas_cm0: overiding NVDATA EEDPTagMode
>>>>>> setting"
>>>>>>
>>>>>> Anybody had this error before?
>>>>> Yes, unrelated to Proxmox. But the SAS controller works fine even with this message.
>>>>>
>>>>> Do you have issues accessing your storage?
>>>>>
>>>>>> Best regards
>>>>>>
>>>>> _______________________________________________
>>>>> pve-user mailing list
>>>>> pve-user at lists.proxmox.com
>>>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> _______________________________________________
>> pve-user mailing list
>> pve-user at lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From wolf at wolfspyre.com Sun Nov 28 23:54:28 2021
From: wolf at wolfspyre.com (Wolf Noble)
Date: Sun, 28 Nov 2021 16:54:28 -0600
Subject: [PVE-User] ARP issue
In-Reply-To: <47BB1F6A-23C7-40EB-B046-A78F6D2FBB62@benappy.com>
References: <47BB1F6A-23C7-40EB-B046-A78F6D2FBB62@benappy.com>
Message-ID: <9961E5E0-A8EF-4B04-AC3A-017C42723916@wolfspyre.com>

Hiya, IC!

It's not obvious (to me) what problem you're trying to address.

Generally, IP addresses ought to be unique... having the same address on multiple hosts *IS POSSIBLE* but is (usually) a ReallyBadIdea(tm)

I want to make sure I understand what you're trying to accomplish before just taking my (mis)understanding and running with it ;)

Wolf Noble
Hoof & Paw
wolf at wolfspyre.com

[= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =]

On Nov 23, 2021, at 1:58 PM, ic wrote:

Hi,

I'm running PVE 7.0 on a bunch of servers. I noticed something strange.
There is a vmbr2 containing one physical interface (ens19) with an IP (10.X.Y.Z/24).
There is a vmbr1 containing NO physical interface with another IP (10.A.B.C/24) (outside of the range of vmbr2, even if this is irrelevant for this problem).

Somehow, the other physical servers connected to ens19 get an ARP reply with the mac address of ens19 for the IP on vmbr1 (which, again, has no physical interface).

In an "ip a" output, this mac address appears only in ens19 and vmbr2. vmbr1 has its own mac (different from the physical mac of ens19/vmbr2).

What am I missing?

In my setup I need vmbr2 to have the same IP on each physical host and not leak on the outside network so this is pretty annoying :(

BR, ic

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
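A side note on the 9211-8i firmware part of the thread above: besides reading the running version from dmesg, the controller firmware can usually be queried (and flashed) with Broadcom/LSI's sas2flash utility. This is only a rough sketch, not a procedure taken from the thread; it assumes sas2flash (or sas2flash.efi in the EFI shell) is available on the host, and the firmware file name below is just an example placeholder from a downloaded firmware package:

  sas2flash -listall                    # list all LSI SAS2 controllers with their firmware and BIOS versions
  sas2flash -c 0 -list                  # detailed information for controller 0
  sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # example flash from the EFI shell; actual file names depend on the package you downloaded

The EFI variant would match the EFI-shell/USB-drive route mentioned earlier in the thread.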
From abreuer1521 at gmail.com Tue Nov 30 04:36:48 2021
From: abreuer1521 at gmail.com (Eric Abreu)
Date: Mon, 29 Nov 2021 22:36:48 -0500
Subject: [PVE-User] Where is ZFS encryption key in Proxmox 7.1
Message-ID:

Hello everyone,

I have created a ZFS pool from Proxmox 7.1 web interface with 2 SSDs in
RAID 1. I noticed that everything works fine after I created the pool, and
ZFS at REST encryption was also enabled. After rebooting the server it did
not ask for a passphrase so my guess is that Proxmox is getting the key
from somewhere in the file system. Anyone could help me find out where?

Thanks in advance.

From t.lamprecht at proxmox.com Tue Nov 30 09:37:35 2021
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Tue, 30 Nov 2021 09:37:35 +0100
Subject: [PVE-User] Where is ZFS encryption key in Proxmox 7.1
In-Reply-To:
References:
Message-ID: <5a879cf8-ed5a-783a-29a7-6d175b2605f7@proxmox.com>

Hi,

On 30.11.21 04:36, Eric Abreu wrote:
> I have created a ZFS pool from Proxmox 7.1 web interface with 2 SSDs in
> RAID 1. I noticed that everything works fine after I created the pool, and
> ZFS at REST encryption was also enabled. After rebooting the server it did
> not ask for a passphrase so my guess is that Proxmox is getting the key
> from somewhere in the file system. Anyone could help me find out where?

Well, how did you enable ZFS at rest encryption? As that is something that won't
be done automatically, and the local-storage web-interface/api currently does not
allow to configure that either.

cheers,
Thomas

From stefan.radman at me.com Tue Nov 30 09:47:47 2021
From: stefan.radman at me.com (Stefan Radman)
Date: Tue, 30 Nov 2021 09:47:47 +0100
Subject: [PVE-User] ARP issue
In-Reply-To: <47BB1F6A-23C7-40EB-B046-A78F6D2FBB62@benappy.com>
References: <47BB1F6A-23C7-40EB-B046-A78F6D2FBB62@benappy.com>
Message-ID:

Hi

The only situation where I can imagine this happening is a PVE host that has Proxy ARP enabled.
This is not enabled by default AFAIK.

root at pve5:~# sysctl net.ipv4.conf.all.proxy_arp
net.ipv4.conf.all.proxy_arp = 0

On my PVE7 hosts, all the kernel's ARP features are in fact off (by default).

root at pve5:~# sysctl -a | fgrep arp | grep -vc ' = 0$'
0

> Somehow, the other physical servers connected to ens19 get an ARP reply with the mac address of ens19 for the IP on vmbr1 (which, again, has no physical interface).

Even if Proxy ARP were configured on your PVE7 hosts, your servers would normally only get an ARP reply when they send a specific ARP request, e.g. "who has 10.A.B.C, tell 10.A.B.D".

Here are questions you should ask yourself:
Why would your servers configured with IP 10.X.Y.Z and connected to vmbr2 via ens19 send ARP requests for 10.A.B.C/24 on that link?
Why do they?
What do the requests look like (who is asking)?

Hope that helps in tracking it down.

Stefan

> On Nov 23, 2021, at 20:58, ic wrote:
>
> Hi,
>
> I'm running PVE 7.0 on a bunch of servers. I noticed something strange.
> There is a vmbr2 containing one physical interface (ens19) with an IP (10.X.Y.Z/24).
> There is a vmbr1 containing NO physical interface with another IP (10.A.B.C/24) (outside of the range of vmbr2, even if this is irrelevant for this problem).
>
> Somehow, the other physical servers connected to ens19 get an ARP reply with the mac address of ens19 for the IP on vmbr1 (which, again, has no physical interface).
>
> In an "ip a" output, this mac address appears only in ens19 and vmbr2. vmbr1 has its own mac (different from the physical mac of ens19/vmbr2).
>
> What am I missing?
>
> In my setup I need vmbr2 to have the same IP on each physical host and not leak on the outside network so this is pretty annoying :(
>
> BR, ic
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
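One possible way to follow up on Stefan's questions (who is asking, and who is answering) is to watch the ARP traffic on the physical interface and to check the per-interface kernel settings directly on the affected PVE node. A rough sketch only, assuming ens19/vmbr2 as in the thread and that tcpdump is installed on the node:

  tcpdump -eni ens19 arp                                              # print ARP requests/replies with the source MAC of each frame
  ip neigh show                                                       # current neighbour (ARP) cache entries on the host
  sysctl net.ipv4.conf.ens19.proxy_arp net.ipv4.conf.vmbr2.proxy_arp  # per-interface proxy_arp, in addition to the "all" setting
  sysctl net.ipv4.conf.all.arp_ignore                                 # 0 (the default) lets the kernel answer ARP for any local address, on any interface

The arp_ignore default of 0 is one common reason an address that only lives on a bridge without ports can still be answered with the physical NIC's MAC.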
From abreuer1521 at gmail.com Tue Nov 30 18:17:05 2021
From: abreuer1521 at gmail.com (Eric Abreu)
Date: Tue, 30 Nov 2021 12:17:05 -0500
Subject: [PVE-User] Where is ZFS encryption key in Proxmox 7.1
In-Reply-To: <5a879cf8-ed5a-783a-29a7-6d175b2605f7@proxmox.com>
References: <5a879cf8-ed5a-783a-29a7-6d175b2605f7@proxmox.com>
Message-ID:

Hi Thomas,

Thanks for the quick response. I'm going to repeat the steps to create the ZFS pool from the web interface and paste them here. I'm pretty sure I did everything from the dashboard and the encryption was enabled by default. I'll keep you posted.

Thanks again for your help.

On Tue, Nov 30, 2021 at 3:37 AM Thomas Lamprecht wrote:

> Hi,
>
> On 30.11.21 04:36, Eric Abreu wrote:
> > I have created a ZFS pool from Proxmox 7.1 web interface with 2 SSDs in
> > RAID 1. I noticed that everything works fine after I created the pool, and
> > ZFS at REST encryption was also enabled. After rebooting the server it did
> > not ask for a passphrase so my guess is that Proxmox is getting the key
> > from somewhere in the file system. Anyone could help me find out where?
>
> Well, how did you enable ZFS at rest encryption? As that is something that won't
> be done automatically, and the local-storage web-interface/api currently does not
> allow to configure that either.
>
> cheers,
> Thomas
>
>
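For anyone landing on this thread with the same question: the ZFS properties themselves show whether a dataset is actually encrypted and where its key is expected to come from. A minimal check, assuming a pool named rpool (a placeholder; use your own pool or dataset name):

  zfs get -r encryption,keyformat,keylocation,keystatus rpool   # encryption state and key handling for the pool and all datasets

If encryption shows "off", the pool was simply created unencrypted, which would also explain the missing passphrase prompt at boot. If it is on, keylocation tells you how the key is obtained: "prompt" means a passphrase must be entered when the key is loaded (zfs load-key), while a file:///... URI points to a key file on disk that load-key reads without prompting.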