From ralvarado at anycast.cl Tue Oct 1 04:08:05 2019 From: ralvarado at anycast.cl (Roberto Alvarado) Date: Tue, 1 Oct 2019 02:08:05 +0000 Subject: [PVE-User] upgrade smartmon tools? Message-ID: <0100016d85131dda-51407933-af11-4556-bd42-6682f578efbe-000000@email.amazonses.com> Hi Folks, Do you know what is the best way to upgrade the smartmon tools package to version 7? I have some problems with nvme units and smartmont 6.x, but with the 7 version all works without problem, but some proxmox packages depends on smartmon and I dont want to create a problem with this update. Thanks! Regards Roberto From t.lamprecht at proxmox.com Tue Oct 1 08:19:13 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Tue, 1 Oct 2019 08:19:13 +0200 Subject: [PVE-User] upgrade smartmon tools? In-Reply-To: <0100016d85131dda-51407933-af11-4556-bd42-6682f578efbe-000000@email.amazonses.com> References: <0100016d85131dda-51407933-af11-4556-bd42-6682f578efbe-000000@email.amazonses.com> Message-ID: <15c864a6-3254-5347-4ff7-16c86e95341f@proxmox.com> Hi, On 10/1/19 4:08 AM, Roberto Alvarado wrote: > Hi Folks, > > Do you know what is the best way to upgrade the smartmon tools package to version 7? > just upgrade to Proxmox VE 6.x, it has smartmontools 7: # apt show smartmontools Package: smartmontools Version: 7.0-pve2 For now we have no plans to backport it to 5.4. > I have some problems with nvme units and smartmont 6.x, but with the 7 version all works without problem, but some proxmox packages depends on smartmon and I dont want to create a problem with this update. You could re-build it yourself from our git[0] if you really cannot upgrade and ain't scared from installing some dev packages and executing some `make` :) # git clone git://git.proxmox.com/git/smartmontools.git # cd smartmontools # make submodule # make deb [0]: https://git.proxmox.com/?p=smartmontools.git;a=summary cheers, Thomas From chris.hofstaedtler at deduktiva.com Wed Oct 2 00:07:21 2019 From: chris.hofstaedtler at deduktiva.com (Chris Hofstaedtler | Deduktiva) Date: Wed, 2 Oct 2019 00:07:21 +0200 Subject: [PVE-User] Kernel Memory Leak on PVE6? In-Reply-To: <20190920123117.bn5eydbjsmb7tfyl@zeha.at> References: <20190920123117.bn5eydbjsmb7tfyl@zeha.at> Message-ID: <20191001220721.gnrtn573bgzs5whr@percival.namespace.at> * Chris Hofstaedtler | Deduktiva [190920 14:31]: > I'm seeing a very interesting problem on PVE6: one of our machines > appears to leak kernel memory over time, up to the point where only > a reboot helps. Shutting down all KVM VMs does not release this > memory. [..] > root at vn03:~# uname -a > Linux vn03 5.0.21-1-pve #1 SMP PVE 5.0.21-1 (Tue, 20 Aug 2019 17:16:32 +0200) x86_64 GNU/Linux I've upgraded both machines yesterday to: Linux vn03 5.0.21-2-pve #1 SMP PVE 5.0.21-6 (Fri, 27 Sep 2019 17:17:02 +0200) x86_64 GNU/Linux And they seem to be doing fine for now. Also, the slab size is a lot smaller just after boot, compared to previous reboots. I'm pretty sure pve-kernel-5.0.21-2-pve:amd64 5.0.21-3 still had the problem. -- Chris Hofstaedtler / Deduktiva GmbH (FN 418592 b, HG Wien) www.deduktiva.com / +43 1 353 1707 From chris.hofstaedtler at deduktiva.com Wed Oct 2 15:54:05 2019 From: chris.hofstaedtler at deduktiva.com (Chris Hofstaedtler | Deduktiva) Date: Wed, 2 Oct 2019 15:54:05 +0200 Subject: [PVE-User] Kernel Memory Leak on PVE6? 
In-Reply-To: <20191001220721.gnrtn573bgzs5whr@percival.namespace.at> References: <20190920123117.bn5eydbjsmb7tfyl@zeha.at> <20191001220721.gnrtn573bgzs5whr@percival.namespace.at> Message-ID: <20191002135404.lsv2q5ttva3t3alo@zeha.at> Replying to myself once more, mostly for the benefit of people following along at home... * Chris Hofstaedtler | Deduktiva [191002 00:07]: > > root at vn03:~# uname -a > > Linux vn03 5.0.21-1-pve #1 SMP PVE 5.0.21-1 (Tue, 20 Aug 2019 17:16:32 +0200) x86_64 GNU/Linux > > I've upgraded both machines yesterday to: > Linux vn03 5.0.21-2-pve #1 SMP PVE 5.0.21-6 (Fri, 27 Sep 2019 17:17:02 +0200) x86_64 GNU/Linux > > And they seem to be doing fine for now. Also, the slab size is a lot > smaller just after boot, compared to previous reboots. So, that helped, but it's not the entire story. For anyone affected: - Do you have ipmitool, libfreeipmi17, freeipmi-common installed? - Are you running check_mk agent or another tool polling ipmitool data? - Can you try uninstalling ipmitool libfreeipmi17 freeipmi-common to see if the problem goes away? Plus, if you happen to have check_mk and can't see details in your memory allocation graphs, you can add an extra "slab only" graph by editing: /opt/omd/versions/1.5.0p22.cre/share/check_mk/pnp-templates/check_mk-mem.linux.php and adding: $opt[] = $defopt . "--title \"Slab only\""; $def[] = "" . mem_area("slab", "af91eb", "Slab (Various smaller caches)", FALSE) ; starting at line 95. Chris -- Chris Hofstaedtler / Deduktiva GmbH (FN 418592 b, HG Wien) www.deduktiva.com / +43 1 353 1707 From s.ivanov at proxmox.com Wed Oct 2 16:20:21 2019 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Wed, 2 Oct 2019 16:20:21 +0200 Subject: [PVE-User] Kernel Memory Leak on PVE6? In-Reply-To: <20191002135404.lsv2q5ttva3t3alo@zeha.at> References: <20190920123117.bn5eydbjsmb7tfyl@zeha.at> <20191001220721.gnrtn573bgzs5whr@percival.namespace.at> <20191002135404.lsv2q5ttva3t3alo@zeha.at> Message-ID: <20191002162021.6f1873a7@rosa.proxmox.com> hi, On Wed, 2 Oct 2019 15:54:05 +0200 Chris Hofstaedtler | Deduktiva wrote: > Replying to myself once more, mostly for the benefit of people > following along at home... > > * Chris Hofstaedtler | Deduktiva [191002 00:07]: > > > root at vn03:~# uname -a > > > Linux vn03 5.0.21-1-pve #1 SMP PVE 5.0.21-1 (Tue, 20 Aug 2019 17:16:32 +0200) x86_64 GNU/Linux > > > > I've upgraded both machines yesterday to: > > Linux vn03 5.0.21-2-pve #1 SMP PVE 5.0.21-6 (Fri, 27 Sep 2019 17:17:02 +0200) x86_64 GNU/Linux > > > > And they seem to be doing fine for now. Also, the slab size is a lot > > smaller just after boot, compared to previous reboots. > > So, that helped, but it's not the entire story. > > For anyone affected: > - Do you have ipmitool, libfreeipmi17, freeipmi-common installed? > - Are you running check_mk agent or another tool polling ipmitool > data? nice catch!! It'd seem one other cases (e.g. [0]) also have check_mk_agent running (otoh one reproducer has this on a KVM-Guest). In any case I'll share that as potential source! Thanks! [0] https://forum.proxmox.com/threads/pve6-slab-cache-grows-until-vms-start-to-crash.58307/ > - Can you try uninstalling ipmitool libfreeipmi17 freeipmi-common > to see if the problem goes away? > > Plus, if you happen to have check_mk and can't see details in your > memory allocation graphs, you can add an extra "slab only" graph > by editing: > /opt/omd/versions/1.5.0p22.cre/share/check_mk/pnp-templates/check_mk-mem.linux.php > and adding: > $opt[] = $defopt . 
"--title \"Slab only\""; > $def[] = "" > . mem_area("slab", "af91eb", "Slab (Various smaller caches)", FALSE) > ; > starting at line 95. > > Chris > From herve.ballans at ias.u-psud.fr Wed Oct 2 18:09:13 2019 From: herve.ballans at ias.u-psud.fr (=?UTF-8?Q?Herv=c3=a9_Ballans?=) Date: Wed, 2 Oct 2019 18:09:13 +0200 Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> Message-ID: <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> Hi Alexandre, We encouter exactly the same problem as Laurent Caron (after upgrade from 5 to 6). So I tried your patch 3 days ago, but unfortunately, the problem still occurs... This is a really annoying problem, since sometimes, all the PVE nodes of our cluster reboot quasi-simultaneously ! And in the same time, we don't encounter this problem with our other PVE cluster in version 5. (And obviously we are waiting for a solution and a stable situation before upgrade it !) It seems to be a unicast or corosync3 problem, but logs are not really verbose at the time of reboot... Is there anything else to test ? Regards, Herv? Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit?: > Hi, > > a patch is available in pvetest > > http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb > > can you test it ? > > (you need to restart corosync after install of the deb) > > > ----- Mail original ----- > De: "Laurent CARON" > ?: "proxmoxve" > Envoy?: Lundi 16 Septembre 2019 09:55:34 > Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 > > Hi, > > > After upgrading our 4 node cluster from PVE 5 to 6, we experience > constant crashed (once every 2 days). > > Those crashes seem related to corosync. > > Since numerous users are reporting sych issues (broken cluster after > upgrade, unstabilities, ...) I wonder if it is possible to downgrade > corosync to version 2.4.4 without impacting functionnality ? > > Basic steps would be: > > On all nodes > > # systemctl stop pve-ha-lrm > > Once done, on all nodes: > > # systemctl stop pve-ha-crm > > Once done, on all nodes: > > # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 > libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 > libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 > > Then, once corosync has been downgraded, on all nodes > > # systemctl start pve-ha-lrm > # systemctl start pve-ha-crm > > Would that work ? > > Thanks > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Fri Oct 4 00:28:58 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 3 Oct 2019 19:28:58 -0300 Subject: [PVE-User] Face problem with cluster: CS_ERR_BAD_HANDLE Message-ID: Hi there I have a 2 node cluster, that was work fine! Then when I add the second node, I get this error: CS_ERR_BAD_HANDLE (or something similar!) when try to use pvecm status I try to remove the node, delete the cluster, but nothing good! My solution was reinstall everything again.... But there's something more I could do in order to recover the cluster?? 
Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From iztok.gregori at elettra.eu Fri Oct 4 15:43:35 2019 From: iztok.gregori at elettra.eu (Iztok Gregori) Date: Fri, 4 Oct 2019 15:43:35 +0200 Subject: [PVE-User] Options add Cloud-init drive greyed out Message-ID: Hi to all! We have a user which have the following roles/path: PVEAuditor /nodes Administrator /pool/ PVEDatastoreAdmin /storage When he try to add a cloud-init drive (hardware->add->CloudInit) the CloudInit option is greyed out. I think we he is missing same permissions (as full admin the option is usable) but we are not able to figure which one. Can anybody give us a hint? Thanks! I. -- Iztok Gregori ICT Systems and Services Elettra - Sincrotrone Trieste S.C.p.A. Telephone: +39 040 3758948 http://www.elettra.eu From jmr.richardson at gmail.com Thu Oct 10 17:47:37 2019 From: jmr.richardson at gmail.com (JR Richardson) Date: Thu, 10 Oct 2019 10:47:37 -0500 Subject: [PVE-User] Trouble Creating CephFS Message-ID: Hi All, I'm testing ceph in the lab. I constructed a 3 node proxmox cluster with latest 5.4-13 PVE all updates done yesterday and used the tutorials to create ceph cluster, added monitors on each node, added 9 OSDs, 3 disks per ceph cluster node, ceph status OK. >From the GUI of the ceph cluster, when I go to CephFS, I can create MSDs, 1 per node OK, but their state is up:standby. When I try to create a CephFS, I get timeout error. But when I check Pools, 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status health_ok. I copied over keyring and I can attach this to an external PVE as RBD storage but I don't get a path parameter so the ceph storage will only allow for raw disk images. If I try to attatch as CephFS, the content does allow for Disk Image. I need the ceph cluster to export the cephfs so I can attach and copy over qcow2 images. I can create new disk and spin up VMs within the ceph storage pool. Because I can attach and use the ceph pool, I'm guessing it is considered Block storage, hence the raw disk only creation for VM HDs. How do I setup the ceph to export the pool as file based? I came across this bug: https://bugzilla.proxmox.com/show_bug.cgi?id=2108 I'm not sure it applies but sounds similar to what I'm seeing. It's certainly possible I'm doing something wrong but I'm not seeing how to get this working as needed. Any guidance is appreciated . Thanks. JR -- JR Richardson Engineering for the Masses Chasing the Azeotrope From t.lamprecht at proxmox.com Fri Oct 11 12:50:03 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 11 Oct 2019 12:50:03 +0200 Subject: [PVE-User] Trouble Creating CephFS In-Reply-To: References: Message-ID: <31bc80be-ccd8-aef7-f679-93850fd2deae@proxmox.com> Hi, On 10/10/19 5:47 PM, JR Richardson wrote: > Hi All, > > I'm testing ceph in the lab. I constructed a 3 node proxmox cluster > with latest 5.4-13 PVE all updates done yesterday and used the > tutorials to create ceph cluster, added monitors on each node, added 9 > OSDs, 3 disks per ceph cluster node, ceph status OK. Just to be sure: you did all that using the PVE Webinterface? Which tutorials do you mean? Why not with 6.0? Starting out with Nautilus now will safe you one major ceph (and PVE) upgrade. > > From the GUI of the ceph cluster, when I go to CephFS, I can create > MSDs, 1 per node OK, but their state is up:standby. When I try to > create a CephFS, I get timeout error. 
But when I check Pools, > 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status > health_ok. Hmm, so no MDSs gets up and ready into the active state... Was "cephfs_metadata" also created? You could check out the # ceph fs status # ceph mds stat > > I copied over keyring and I can attach this to an external PVE as RBD > storage but I don't get a path parameter so the ceph storage will only > allow for raw disk images. If I try to attatch as CephFS, the content > does allow for Disk Image. I need the ceph cluster to export the > cephfs so I can attach and copy over qcow2 images. I can create new > disk and spin up VMs within the ceph storage pool. Because I can > attach and use the ceph pool, I'm guessing it is considered Block > storage, hence the raw disk only creation for VM HDs. How do I setup > the ceph to export the pool as file based? with either CephFS or, to be techincally complete, by creating an FS on a Rados Block Device (RBD). > > I came across this bug: > https://bugzilla.proxmox.com/show_bug.cgi?id=2108 > > I'm not sure it applies but sounds similar to what I'm seeing. It's I realyl think that this exact bug cannot apply to you if you run with, 5.4.. if you did not see any: > mon_command failed - error parsing integer value '': Expected option value to be integer, got ''in"} errors in the log this it cannot be this bug. Not saying that it cannot possibly be a bug, but not this one, IMO. cheers, Thomas From jmr.richardson at gmail.com Fri Oct 11 14:07:13 2019 From: jmr.richardson at gmail.com (JR Richardson) Date: Fri, 11 Oct 2019 07:07:13 -0500 Subject: [PVE-User] Trouble Creating CephFS In-Reply-To: <31bc80be-ccd8-aef7-f679-93850fd2deae@proxmox.com> References: <31bc80be-ccd8-aef7-f679-93850fd2deae@proxmox.com> Message-ID: On Fri, Oct 11, 2019 at 5:50 AM Thomas Lamprecht wrote: > > Hi, > > On 10/10/19 5:47 PM, JR Richardson wrote: > > Hi All, > > > > I'm testing ceph in the lab. I constructed a 3 node proxmox cluster > > with latest 5.4-13 PVE all updates done yesterday and used the > > tutorials to create ceph cluster, added monitors on each node, added 9 > > OSDs, 3 disks per ceph cluster node, ceph status OK. > > Just to be sure: you did all that using the PVE Webinterface? Which tutorials > do you mean? Why not with 6.0? Starting out with Nautilus now will safe you > one major ceph (and PVE) upgrade. Yes, all configured through web interface. Tutorials: https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes https://www.youtube.com/watch?v=jFFLINtNnXs https://www.youtube.com/watch?v=0t1UiOg6UoE And a few other videos and howto's from random folks. Honestly, I did not consider using 6.0, I read some posts about cluster nodes randomly rebooting after upgrade to 6.0 and I use 5.4 in production. I'll redo my lab with 6.0 and see how it goes. > > > > > From the GUI of the ceph cluster, when I go to CephFS, I can create > > MSDs, 1 per node OK, but their state is up:standby. When I try to > > create a CephFS, I get timeout error. But when I check Pools, > > 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status > > health_ok. > > Hmm, so no MDSs gets up and ready into the active state... > Was "cephfs_metadata" also created? 
> > You could check out the > # ceph fs status > # ceph mds stat > root at cephclust1:~# ceph fs status +-------------+ | Standby MDS | +-------------+ | cephclust3 | | cephclust1 | | cephclust2 | +-------------+ MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable) root at cephclust1:~# ceph mds stat , 3 up:standby I was looking into MDS and why it was in standby instead of active, but I didn't get far, is this could be the issue? I don't show any cephfs_metadata pools created, only cephfs_data was created. > > > > > I copied over keyring and I can attach this to an external PVE as RBD > > storage but I don't get a path parameter so the ceph storage will only > > allow for raw disk images. If I try to attatch as CephFS, the content > > does allow for Disk Image. I need the ceph cluster to export the > > cephfs so I can attach and copy over qcow2 images. I can create new > > disk and spin up VMs within the ceph storage pool. Because I can > > attach and use the ceph pool, I'm guessing it is considered Block > > storage, hence the raw disk only creation for VM HDs. How do I setup > > the ceph to export the pool as file based? > > with either CephFS or, to be techincally complete, by creating an FS on > a Rados Block Device (RBD). > > > > > I came across this bug: > > https://bugzilla.proxmox.com/show_bug.cgi?id=2108 > > > > I'm not sure it applies but sounds similar to what I'm seeing. It's > > I realyl think that this exact bug cannot apply to you if you run with, > 5.4.. if you did not see any: > > > mon_command failed - error parsing integer value '': Expected option value to be integer, got ''in"} > > errors in the log this it cannot be this bug. Not saying that it cannot > possibly be a bug, but not this one, IMO. No log errors like that so probably not that bug. > > cheers, > Thomas > Thanks. JR -- JR Richardson Engineering for the Masses Chasing the Azeotrope From jmr.richardson at gmail.com Sat Oct 12 14:53:43 2019 From: jmr.richardson at gmail.com (JR Richardson) Date: Sat, 12 Oct 2019 07:53:43 -0500 Subject: [PVE-User] Trouble Creating CephFS UPDATE Message-ID: <000001d580fc$14bb22a0$3e3167e0$@gmail.com> OK Folk's, I spent a good time in the lab testing cephfs creation on a 3 node proxmox cluster in both 5.4 (latest updates) and 6.0 (latest updates). On both versions, from the GUI, creating a cephfs does not work. I get a timeout error, but the cephfs_data pool does get created, cephfs_metadata pool does not get created and the cephfs MDS does not come active. I can create cephfs_metadata pool from command line OK and it shows up in the GUI. Once cephfs_metadata is created manually, the MDS server become active and I can mount cephfs. Now the other thing is mounting cephfs from another proxmox cluster within the GUI, the only option is to mount it for VZDump, ISO, Template and Snippets, not for disk images, which is really what I need. 
So what I have to do is this: On the 3 node ceph cluster from the PVE GUI Install Ceph on 3 nodes Create 3 monitors Create 3 MDS Add the OSDs Switch to the command line on one of the nodes # ceph osd pool application enable cephfs_metadata cephfs # ceph fs new cephfs cephfs_metadata cephfs On the external PVE Cluster command line: SCP over the ceph keyring to /etc/pve/priv/cephfs.keyring Edit the keyring file to only have the key, nothing else # mkdir /mnt/mycephfs # mount -t ceph [IP of MDS SERVER]:/ /mnt/mycephfs -o name=admin,secretfile=/etc/pve/ceph/cephfs.secret Switch back to the GUI and add Directoy Storage This will allow adding cephfs and allow for disk image storage But this is a bottle neck and I don?t think the proper way to accomplish it. Sharing a directory file store across a cluster? So I'm still looking for help to get this working correctly. Thanks. JR JR Richardson Engineering for the Masses Chasing the Azeotrope -----Original Message----- From: JR Richardson Sent: Friday, October 11, 2019 7:07 AM To: Thomas Lamprecht Cc: PVE User List Subject: Re: [PVE-User] Trouble Creating CephFS On Fri, Oct 11, 2019 at 5:50 AM Thomas Lamprecht wrote: > > Hi, > > On 10/10/19 5:47 PM, JR Richardson wrote: > > Hi All, > > > > I'm testing ceph in the lab. I constructed a 3 node proxmox cluster > > with latest 5.4-13 PVE all updates done yesterday and used the > > tutorials to create ceph cluster, added monitors on each node, added > > 9 OSDs, 3 disks per ceph cluster node, ceph status OK. > > Just to be sure: you did all that using the PVE Webinterface? Which > tutorials do you mean? Why not with 6.0? Starting out with Nautilus > now will safe you one major ceph (and PVE) upgrade. Yes, all configured through web interface. Tutorials: https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes https://www.youtube.com/watch?v=jFFLINtNnXs https://www.youtube.com/watch?v=0t1UiOg6UoE And a few other videos and howto's from random folks. Honestly, I did not consider using 6.0, I read some posts about cluster nodes randomly rebooting after upgrade to 6.0 and I use 5.4 in production. I'll redo my lab with 6.0 and see how it goes. > > > > > From the GUI of the ceph cluster, when I go to CephFS, I can create > > MSDs, 1 per node OK, but their state is up:standby. When I try to > > create a CephFS, I get timeout error. But when I check Pools, > > 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status > > health_ok. > > Hmm, so no MDSs gets up and ready into the active state... > Was "cephfs_metadata" also created? > > You could check out the > # ceph fs status > # ceph mds stat > root at cephclust1:~# ceph fs status +-------------+ | Standby MDS | +-------------+ | cephclust3 | | cephclust1 | | cephclust2 | +-------------+ MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable) root at cephclust1:~# ceph mds stat , 3 up:standby I was looking into MDS and why it was in standby instead of active, but I didn't get far, is this could be the issue? I don't show any cephfs_metadata pools created, only cephfs_data was created. > > > > > I copied over keyring and I can attach this to an external PVE as > > RBD storage but I don't get a path parameter so the ceph storage > > will only allow for raw disk images. If I try to attatch as CephFS, > > the content does allow for Disk Image. I need the ceph cluster to > > export the cephfs so I can attach and copy over qcow2 images. I can > > create new disk and spin up VMs within the ceph storage pool. 
> > Because I can attach and use the ceph pool, I'm guessing it is > > considered Block storage, hence the raw disk only creation for VM > > HDs. How do I setup the ceph to export the pool as file based? > > with either CephFS or, to be techincally complete, by creating an FS > on a Rados Block Device (RBD). > > > > > I came across this bug: > > https://bugzilla.proxmox.com/show_bug.cgi?id=2108 > > > > I'm not sure it applies but sounds similar to what I'm seeing. It's > > I realyl think that this exact bug cannot apply to you if you run > with, 5.4.. if you did not see any: > > > mon_command failed - error parsing integer value '': Expected > > option value to be integer, got ''in"} > > errors in the log this it cannot be this bug. Not saying that it > cannot possibly be a bug, but not this one, IMO. No log errors like that so probably not that bug. > > cheers, > Thomas > Thanks. JR -- JR Richardson Engineering for the Masses Chasing the Azeotrope From lae at lae.is Sat Oct 12 20:25:03 2019 From: lae at lae.is (Musee Ullah) Date: Sat, 12 Oct 2019 11:25:03 -0700 Subject: Proxmox VE 6.x deployments with Ansible In-Reply-To: References: Message-ID: <6e05b81d-8585-ac37-8871-0cf8a0d1c41f@lae.is> v1.6.2 has been cut. You can install the role using: ansible-galaxy install lae.proxmox,v1.6.2 The full release notes can be found on Github [0], but the following bit is important: Support for the following role variables will be removed in the next minor version (1.7.0): |pve_cluster_ring0_addr| |pve_cluster_ring1_addr| |pve_cluster_bindnet0_addr| |pve_cluster_bindnet1_addr| |pve_cluster_link0_addr| |pve_cluster_link1_addr| If you are overriding any of these, please update your playbooks to use |pve_cluster_addr0| and/or |pve_cluster_addr1| now. Ceph testers are still needed - namely to come across issues like #73 [1] and help resolve them (as I personally do not have resources to test physical PVE deployments anymore) and flesh out the documentation [2]. Side note, but if any of you are looking for some problems to solve for Hacktoberfest, there are a few feature requests on the tracker you could check out [3]. [0] https://github.com/lae/ansible-role-proxmox/releases/tag/v1.6.2 [1] https://github.com/lae/ansible-role-proxmox/issues/73 [2] https://github.com/lae/ansible-role-proxmox/issues/68 [3] https://github.com/lae/ansible-role-proxmox/issues -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: OpenPGP digital signature URL: From d.csapak at proxmox.com Mon Oct 14 08:51:38 2019 From: d.csapak at proxmox.com (Dominik Csapak) Date: Mon, 14 Oct 2019 08:51:38 +0200 Subject: [PVE-User] Trouble Creating CephFS UPDATE In-Reply-To: <000001d580fc$14bb22a0$3e3167e0$@gmail.com> References: <000001d580fc$14bb22a0$3e3167e0$@gmail.com> Message-ID: hi, just fyi, i just tested this on pve 6 (on current packages) via gui: 1) install 3 or more pve hosts 2) cluster them 3) install/init ceph on all of them 4) create 3 mons 5) create at least 1 manager 6) add osds 7) add one mds 8) create ceph fs -> works so i guess you are either missing something, or something in your network/setup is not working correctly kind regards dominik From mike at oeg.com.au Tue Oct 15 10:19:55 2019 From: mike at oeg.com.au (Mike O'Connor) Date: Tue, 15 Oct 2019 18:49:55 +1030 Subject: [PVE-User] Trouble Creating CephFS UPDATE In-Reply-To: References: <000001d580fc$14bb22a0$3e3167e0$@gmail.com> Message-ID: Hi > > so i guess you are either missing something, > or something in your network/setup is not working correctly Same here just finished a from scratch rebuild v4 to v6 and had no problems using the GUI. Going to need more details form the CEPH cluster for someone to work this out. Cheers Mike From adamw at matrixscience.com Tue Oct 15 11:58:41 2019 From: adamw at matrixscience.com (Adam Weremczuk) Date: Tue, 15 Oct 2019 10:58:41 +0100 Subject: [PVE-User] running Debian 10 containers in PVE 5.4 Message-ID: Hello, I'm running PVE 5.4-13 (Debian 9.11 based) using free no-subscription repos. Recently I've deployed a few Debian 10.0 containers which I later upgraded to 10.1. I'm having constant issues with these CTs such as delayed start and console availability (up to 15 minutes), unexpected network disconnections etc. No such issues for Debian 9.x containers. Is running Debian 10.x over 9.11 officially supported? Will switching to paid community repositories greatly improve my experience? Thanks, Adam From jmr.richardson at gmail.com Tue Oct 15 12:25:00 2019 From: jmr.richardson at gmail.com (JR Richardson) Date: Tue, 15 Oct 2019 05:25:00 -0500 Subject: [PVE-User] Trouble Creating CephFS UPDATE Message-ID: <000501d58342$cd5a8740$680f95c0$@gmail.com> Hi > > so i guess you are either missing something, or something in your > network/setup is not working correctly Same here just finished a from scratch rebuild v4 to v6 and had no problems using the GUI. Going to need more details form the CEPH cluster for someone to work this out. Cheers Mike Hey Guys, Really appreciate the feedback. I'm guessing either my network or hardware was bottlenecking me on the lab setup. When I started performance testing, my disk throughput was awful slow and almost unusable. I need to acquire some newer hardware and retest. Thanks. JR From t.lamprecht at proxmox.com Tue Oct 15 13:01:39 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Tue, 15 Oct 2019 13:01:39 +0200 Subject: [PVE-User] running Debian 10 containers in PVE 5.4 In-Reply-To: References: Message-ID: <69916624-a1b1-d274-220b-6298cac56152@proxmox.com> Hi, On 10/15/19 11:58 AM, Adam Weremczuk wrote: > Hello, > > I'm running PVE 5.4-13 (Debian 9.11 based) using free no-subscription repos. > > Recently I've deployed a few Debian 10.0 containers which I later upgraded to 10.1. 
> > I'm having constant issues with these CTs such as delayed start and console availability (up to 15 minutes), unexpected network disconnections etc. > > No such issues for Debian 9.x containers. > > Is running Debian 10.x over 9.11 officially supported? Somewhat, but the relative new systemd inside Debian 10 and other newer distros is not always that compatible with current Container Environments. As a starter I'd enable the "nesting" feature in the CTs options, it should help for quite a few issues by allowing systemd to setup it's own cgroups in the CT. This option can only be done as root, and while it has no real security implications it still exposes more of the host in the CT. There's some work underway to improve the CT experience with newer distribution versions, but that will need still quite a bit of time and will only become available on the PVE 6.X series, AFAICT. > > Will switching to paid community repositories greatly improve my experience? No, while the enterprise version is surely more stable it has not more features than the community ones. cheers, Thomas From nada at verdnatura.es Tue Oct 15 14:44:12 2019 From: nada at verdnatura.es (nada at verdnatura.es) Date: Tue, 15 Oct 2019 14:44:12 +0200 Subject: [PVE-User] running Debian 10 containers in PVE 5.4 In-Reply-To: References: Message-ID: hi Adam we are also running old Debian 9 (PVE 5.3-9 and 5.3-5) and after dist-upgrade of CTs from Stretch to Buster i had to convert majority of them to unprivileged with nesting they appear to be ok since that conversion (they are running more than a month ;-) planning to upgrade PVE to PVE 6 before Xmas... Nada El 2019-10-15 11:58, Adam Weremczuk escribi?: > Hello, > > I'm running PVE 5.4-13 (Debian 9.11 based) using free no-subscription > repos. > > Recently I've deployed a few Debian 10.0 containers which I later > upgraded to 10.1. > > I'm having constant issues with these CTs such as delayed start and > console availability (up to 15 minutes), unexpected network > disconnections etc. > > No such issues for Debian 9.x containers. > > Is running Debian 10.x over 9.11 officially supported? > > Will switching to paid community repositories greatly improve my > experience? > > Thanks, > Adam > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From adamw at matrixscience.com Tue Oct 15 16:30:52 2019 From: adamw at matrixscience.com (Adam Weremczuk) Date: Tue, 15 Oct 2019 15:30:52 +0100 Subject: [PVE-User] running Debian 10 containers in PVE 5.4 In-Reply-To: References: Message-ID: <2b87c145-c118-575a-cd25-16191f00ef0e@matrixscience.com> I ended up upgrading Debian and PVE on both my cluster nodes. I followed: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 and it went pretty well. Only one (hopefully minor) issue observed so far which I'm going to raise in a separate post. On 15/10/19 13:44, nada at verdnatura.es wrote: > hi Adam > we are also running old Debian 9 (PVE 5.3-9 and 5.3-5) > and after dist-upgrade of CTs from Stretch to Buster > i had to convert majority of them to unprivileged with nesting > they appear to be ok since that conversion > (they are running more than a month ;-) > planning to upgrade PVE to PVE 6 before Xmas... > Nada > > El 2019-10-15 11:58, Adam Weremczuk escribi?: >> Hello, >> >> I'm running PVE 5.4-13 (Debian 9.11 based) using free no-subscription >> repos. 
>> >> Recently I've deployed a few Debian 10.0 containers which I later >> upgraded to 10.1. >> >> I'm having constant issues with these CTs such as delayed start and >> console availability (up to 15 minutes), unexpected network >> disconnections etc. >> >> No such issues for Debian 9.x containers. >> >> Is running Debian 10.x over 9.11 officially supported? >> >> Will switching to paid community repositories greatly improve my >> experience? >> >> Thanks, >> Adam >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From adamw at matrixscience.com Tue Oct 15 16:43:45 2019 From: adamw at matrixscience.com (Adam Weremczuk) Date: Tue, 15 Oct 2019 15:43:45 +0100 Subject: [PVE-User] GPG signature error running pveam update Message-ID: <8ee596f8-e6cc-22d0-f30a-1dae5567521b@matrixscience.com> Hi all, It started failing following Debian 9->10 and PVE 5->6 upgrade: pveam update update failed - see /var/log/pveam.log for details "apt-key list" wasn't showing it so I've added it: wget https://github.com/turnkeylinux/turnkey-keyring/raw/master/turnkey-release-keyring.gpg apt-key add turnkey-release-keyring.gpg OK It's now listed and looks ok at the first glance: /etc/apt/trusted.gpg -------------------- pub?? rsa2048 2008-08-15 [SC] [expires: 2023-08-12] ????? 694C FF26 795A 29BA E07B? 4EB5 85C2 5E95 A16E B94D uid?????????? [ unknown] Turnkey Linux Release Key The errors in "pveam update" and pveam.log haven't gone away though: 2019-10-15 15:34:31 starting update 2019-10-15 15:34:31 start download http://download.proxmox.com/images/aplinfo-pve-6.dat.asc 2019-10-15 15:34:31 download finished: 200 OK 2019-10-15 15:34:31 start download http://download.proxmox.com/images/aplinfo-pve-6.dat.gz 2019-10-15 15:34:31 download finished: 200 OK 2019-10-15 15:34:31 signature verification: gpgv: Signature made Fri Sep 27 14:53:26 2019 BST 2019-10-15 15:34:31 signature verification: gpgv: using RSA key 353479F83781D7F8ED5F5AC57BF2812E8A6E88E0 2019-10-15 15:34:31 signature verification: gpgv: Can't check signature: No public key 2019-10-15 15:34:31 unable to verify signature - command '/usr/bin/gpgv -q --keyring /usr/share/doc/pve-manager/trustedkeys.gpg /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.31480.asc /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.31480' failed: exit code 2 2019-10-15 15:34:31 start download https://releases.turnkeylinux.org/pve/aplinfo.dat.asc 2019-10-15 15:34:31 download finished: 200 OK 2019-10-15 15:34:31 start download https://releases.turnkeylinux.org/pve/aplinfo.dat.gz 2019-10-15 15:34:32 download finished: 200 OK 2019-10-15 15:34:32 signature verification: gpgv: Signature made Sun Aug? 4 08:49:59 2019 BST 2019-10-15 15:34:32 signature verification: gpgv: using RSA key 694CFF26795A29BAE07B4EB585C25E95A16EB94D 2019-10-15 15:34:32 signature verification: gpgv: Good signature from "Turnkey Linux Release Key " 2019-10-15 15:34:32 update successful Am I doing something wrong? Or shall I ignore the error? 
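One thing worth knowing while debugging this: apt-key only manages APT's own keyring (/etc/apt/trusted.gpg), whereas pveam verifies against the separate keyring named in the log, so adding keys with apt-key cannot change the outcome. To compare what that keyring actually contains against the key ID in the failing signature, a read-only check along these lines should do (nothing is modified):

# gpg --no-default-keyring --keyring /usr/share/doc/pve-manager/trustedkeys.gpg --list-keys --with-fingerprint
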
Thanks, Adam From t.lamprecht at proxmox.com Wed Oct 16 08:02:11 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Wed, 16 Oct 2019 08:02:11 +0200 Subject: [PVE-User] GPG signature error running pveam update In-Reply-To: <8ee596f8-e6cc-22d0-f30a-1dae5567521b@matrixscience.com> References: <8ee596f8-e6cc-22d0-f30a-1dae5567521b@matrixscience.com> Message-ID: Hi, On 10/15/19 4:43 PM, Adam Weremczuk wrote: > Hi all, > > It started failing following Debian 9->10 and PVE 5->6 upgrade: > > pveam update > update failed - see /var/log/pveam.log for details > > "apt-key list" wasn't showing it so I've added it: > > wget https://github.com/turnkeylinux/turnkey-keyring/raw/master/turnkey-release-keyring.gpg > apt-key add turnkey-release-keyring.gpg > OK > > It's now listed and looks ok at the first glance: > > /etc/apt/trusted.gpg > -------------------- > pub?? rsa2048 2008-08-15 [SC] [expires: 2023-08-12] > ????? 694C FF26 795A 29BA E07B? 4EB5 85C2 5E95 A16E B94D > uid?????????? [ unknown] Turnkey Linux Release Key > > The errors in "pveam update" and pveam.log haven't gone away though: > > 2019-10-15 15:34:31 starting update > 2019-10-15 15:34:31 start download http://download.proxmox.com/images/aplinfo-pve-6.dat.asc > 2019-10-15 15:34:31 download finished: 200 OK > 2019-10-15 15:34:31 start download http://download.proxmox.com/images/aplinfo-pve-6.dat.gz > 2019-10-15 15:34:31 download finished: 200 OK > 2019-10-15 15:34:31 signature verification: gpgv: Signature made Fri Sep 27 14:53:26 2019 BST > 2019-10-15 15:34:31 signature verification: gpgv: using RSA key 353479F83781D7F8ED5F5AC57BF2812E8A6E88E0 > 2019-10-15 15:34:31 signature verification: gpgv: Can't check signature: No public key > 2019-10-15 15:34:31 unable to verify signature - command '/usr/bin/gpgv -q --keyring /usr/share/doc/pve-manager/trustedkeys.gpg /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.31480.asc /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.31480' failed: exit code 2 > 2019-10-15 15:34:31 start download https://releases.turnkeylinux.org/pve/aplinfo.dat.asc > 2019-10-15 15:34:31 download finished: 200 OK > 2019-10-15 15:34:31 start download https://releases.turnkeylinux.org/pve/aplinfo.dat.gz > 2019-10-15 15:34:32 download finished: 200 OK > 2019-10-15 15:34:32 signature verification: gpgv: Signature made Sun Aug? 4 08:49:59 2019 BST > 2019-10-15 15:34:32 signature verification: gpgv: using RSA key 694CFF26795A29BAE07B4EB585C25E95A16EB94D > 2019-10-15 15:34:32 signature verification: gpgv: Good signature from "Turnkey Linux Release Key " > 2019-10-15 15:34:32 update successful > > Am I doing something wrong? > No, we were doing something wrong :/ So the trusted keys is not updated all the time, it would normally be updated when a new file was added, but in our case the build happens in a temporary directory with all times having the same timestamp - so GNU make did not know that it needs to regenerate the trusted key file. As keys are added/removed in a frequency of ~ 2 years this was forgotten to do here by manually running the update target in the source and committing to git. I'll fix this up and release a follow up pve-manager soon, thanks for the report and sorry for any inconvenience caused. 
cheers, Thomas From gilberto.nunes32 at gmail.com Tue Oct 22 14:42:51 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 22 Oct 2019 09:42:51 -0300 Subject: [PVE-User] Strange behavior vzdump Message-ID: Hi there I have notice that vzdump options, maxfiles doesn't work properly. I set --maxfiles to 10, but still it's hold old files... For now, I add --remove 1, to the /etc/vzdump.conf, but, according to the vzdump man page, default --remove set is 1, i.e., enable! Why vzdump do not remove old backup, just when set maxfiles?? Or even worst, if --remove 1 is the default options, why vzdump doesn't work?? Proxmox VE version 5.4 Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From gilberto.nunes32 at gmail.com Tue Oct 22 14:43:54 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 22 Oct 2019 09:43:54 -0300 Subject: [PVE-User] VMID clarifying Message-ID: Folks, When you create a VM, it generates an ID, for example 100, 101, 102 ... etc ... By removing this VM 101 let's say, and then creating a new one, I noticed that it generates this new one with ID 101 again. But I also realized that it takes the backups I had from the old VM 101 and links to this new one, even though it's OSs and everything else. My fear is that running a backup routine will overwrite the images I had with that ID. It would probably happen if I had not realized. Any way to use sequential ID and not go back in IDs? I do not know if i was clear.... --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From f.gruenbichler at proxmox.com Tue Oct 22 15:28:02 2019 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Tue, 22 Oct 2019 15:28:02 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: References: Message-ID: <1571750716.27pvuattzw.astroid@nora.none> On October 22, 2019 2:43 pm, Gilberto Nunes wrote: > Folks, > When you create a VM, it generates an ID, for example 100, 101, 102 ... etc no. when you create a VM in the GUI, it suggests the first free slot in the guest ID range. you can choose whatever you want ;) > ... > By removing this VM 101 let's say, and then creating a new one, I noticed > that it generates this new one with ID 101 again. only if you don't set another ID. > But I also realized that it takes the backups I had from the old VM 101 and > links to this new one, even though it's OSs and everything else. My fear is > that running a backup routine will overwrite the images I had with that ID. > It would probably happen if I had not realized. the solution is to either not delete VMs that are still important (even if 'important' just means having the associated backups semi-protected) > Any way to use sequential ID and not go back in IDs? I do not know if i was > clear.... or don't use the default VMID suggestion by the GUI. there is no way to change that behaviour, since we don't have a record of "IDs that (might) have been used at some point in the past" From gaio at sv.lnf.it Tue Oct 22 15:44:20 2019 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Tue, 22 Oct 2019 15:44:20 +0200 Subject: [PVE-User] Strange behavior vzdump In-Reply-To: References: Message-ID: <20191022134420.GN6111@sv.lnf.it> Mandi! Gilberto Nunes In chel di` si favelave... > I have notice that vzdump options, maxfiles doesn't work properly. > I set --maxfiles to 10, but still it's hold old files... 
> For now, I add --remove 1, to the /etc/vzdump.conf, but, according to the > vzdump man page, default --remove set is 1, i.e., enable! > Why vzdump do not remove old backup, just when set maxfiles?? > Or even worst, if --remove 1 is the default options, why vzdump doesn't > work?? > Proxmox VE version 5.4 This make some noise on my ear... two clusters, one with ''traditional'' iSCSI SAN storage, one with Ceph. On Ceph one: root at hulk:~# ls /srv/pve/dump/ | grep \.lzo | cut -d '-' -f 1-3 | sort | uniq -c 1 vzdump-lxc-103 1 vzdump-lxc-105 1 vzdump-lxc-106 1 vzdump-lxc-109 3 vzdump-lxc-111 50 vzdump-lxc-114 49 vzdump-lxc-117 1 vzdump-qemu-104 3 vzdump-qemu-108 3 vzdump-qemu-113 1 vzdump-qemu-115 49 vzdump-qemu-116 My backup stategy is: + for some VM/LXC, daily backup (114, 116, 117 are 'daily') all day's week apart saturday. + for all VM/LXC, on saturday bacula pre-script that run the backup, and then bacula put on tape. the bacula 'pre' script do: /usr/bin/vzdump 117 -storage Backup -maxfiles 1 -remove -compress lzo -mode suspend -quiet -mailto ced at sv.lnf.it -mailnotification failure for every LXC/VM, and as you can see, delete old backup only for some VM/LXC, not all. 'backup' storage is defined as: nfs: Backup export /srv/pve path /mnt/pve/Backup server 10.27.251.11 content vztmpl,images,iso,backup,rootdir maxfiles 0 options vers=3,soft,intr Clearly, no error in logs. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From lists at merit.unu.edu Tue Oct 22 16:05:21 2019 From: lists at merit.unu.edu (lists) Date: Tue, 22 Oct 2019 16:05:21 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: <1571750716.27pvuattzw.astroid@nora.none> References: <1571750716.27pvuattzw.astroid@nora.none> Message-ID: Hi, Actually, we feel the same as Gilberto. Could proxmox not for example default to something like: highest currently-in-use-number PLUS 1? MJ On 22-10-2019 15:28, Fabian Gr?nbichler wrote: > On October 22, 2019 2:43 pm, Gilberto Nunes wrote: >> Folks, >> When you create a VM, it generates an ID, for example 100, 101, 102 ... etc > > no. when you create a VM in the GUI, it suggests the first free slot in > the guest ID range. you can choose whatever you want ;) > >> ... >> By removing this VM 101 let's say, and then creating a new one, I noticed >> that it generates this new one with ID 101 again. > > only if you don't set another ID. > >> But I also realized that it takes the backups I had from the old VM 101 and >> links to this new one, even though it's OSs and everything else. My fear is >> that running a backup routine will overwrite the images I had with that ID. >> It would probably happen if I had not realized. > > the solution is to either not delete VMs that are still important (even > if 'important' just means having the associated backups semi-protected) > >> Any way to use sequential ID and not go back in IDs? I do not know if i was >> clear.... > > or don't use the default VMID suggestion by the GUI. 
there is no way to > change that behaviour, since we don't have a record of "IDs that (might) > have been used at some point in the past" > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gilberto.nunes32 at gmail.com Tue Oct 22 16:12:21 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 22 Oct 2019 11:12:21 -0300 Subject: [PVE-User] VMID clarifying In-Reply-To: References: <1571750716.27pvuattzw.astroid@nora.none> Message-ID: or better, start with VMID 1, then +n.... --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em ter, 22 de out de 2019 ?s 11:05, lists escreveu: > Hi, > > Actually, we feel the same as Gilberto. > > Could proxmox not for example default to something like: highest > currently-in-use-number PLUS 1? > > MJ > > On 22-10-2019 15:28, Fabian Gr?nbichler wrote: > > On October 22, 2019 2:43 pm, Gilberto Nunes wrote: > >> Folks, > >> When you create a VM, it generates an ID, for example 100, 101, 102 ... > etc > > > > no. when you create a VM in the GUI, it suggests the first free slot in > > the guest ID range. you can choose whatever you want ;) > > > >> ... > >> By removing this VM 101 let's say, and then creating a new one, I > noticed > >> that it generates this new one with ID 101 again. > > > > only if you don't set another ID. > > > >> But I also realized that it takes the backups I had from the old VM 101 > and > >> links to this new one, even though it's OSs and everything else. My > fear is > >> that running a backup routine will overwrite the images I had with that > ID. > >> It would probably happen if I had not realized. > > > > the solution is to either not delete VMs that are still important (even > > if 'important' just means having the associated backups semi-protected) > > > >> Any way to use sequential ID and not go back in IDs? I do not know if i > was > >> clear.... > > > > or don't use the default VMID suggestion by the GUI. there is no way to > > change that behaviour, since we don't have a record of "IDs that (might) > > have been used at some point in the past" > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From d.csapak at proxmox.com Tue Oct 22 16:19:18 2019 From: d.csapak at proxmox.com (Dominik Csapak) Date: Tue, 22 Oct 2019 16:19:18 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: References: <1571750716.27pvuattzw.astroid@nora.none> Message-ID: there was already a lengthy discussion of this topic on the bugtracker see https://bugzilla.proxmox.com/show_bug.cgi?id=1822 From lists at merit.unu.edu Tue Oct 22 16:47:43 2019 From: lists at merit.unu.edu (lists) Date: Tue, 22 Oct 2019 16:47:43 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: References: <1571750716.27pvuattzw.astroid@nora.none> Message-ID: ok, have read it. Pity, the outcome. Reading that the final suggestion is: using a random number, then perhaps pve could simply suggest a random PVID number between 1 and 4 billion. (and if already in use: choose another random number) No need to store anything anywhere, and chances of duplicating a PVID would be virtually zero. 
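For what it's worth, a caller can already get close to that behaviour today without any change on the PVE side, because the API can check whether a specific VMID is free (/cluster/nextid raises an error if the ID you pass is already used). A rough, untested sketch in plain shell (assumes pvesh on a cluster node and shuf from coreutils):

# pick random candidates until one is reported as free
while true; do
    CANDIDATE=$(shuf -i 100-999999999 -n 1)
    if pvesh get /cluster/nextid --vmid $CANDIDATE >/dev/null 2>&1; then
        echo "$CANDIDATE"
        break
    fi
done

Without the --vmid argument the same call returns the lowest free ID, which is what the GUI suggestion is based on.
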
MJ On 22-10-2019 16:19, Dominik Csapak wrote: > there was already a lengthy discussion of this topic on the bugtracker > see https://bugzilla.proxmox.com/show_bug.cgi?id=1822 > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From pve at junkyard.4t2.com Tue Oct 22 19:13:29 2019 From: pve at junkyard.4t2.com (Tom Weber) Date: Tue, 22 Oct 2019 19:13:29 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: References: <1571750716.27pvuattzw.astroid@nora.none> Message-ID: <400e99c0227195fa41c3b6c8c499755ba02c3711.camel@junkyard.4t2.com> and then others will argue that they don't want that space polluted by random numbers... (i'm personnaly encoding IPs in the PVID and let the lower numbers for testing) it's simple, if your using the GUI, just enter a random number (maybe c&p from a rnd generator). if your using an api call, just let your caller generate a random number. Tom Am Dienstag, den 22.10.2019, 16:47 +0200 schrieb lists: > ok, have read it. > > Pity, the outcome. > > Reading that the final suggestion is: using a random number, then > perhaps pve could simply suggest a random PVID number between 1 and > 4 > billion. > > (and if already in use: choose another random number) > > No need to store anything anywhere, and chances of duplicating a > PVID > would be virtually zero. > > MJ > > On 22-10-2019 16:19, Dominik Csapak wrote: > > there was already a lengthy discussion of this topic on the > > bugtracker > > see https://bugzilla.proxmox.com/show_bug.cgi?id=1822 > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From piccardi at truelite.it Wed Oct 23 10:29:41 2019 From: piccardi at truelite.it (Simone Piccardi) Date: Wed, 23 Oct 2019 10:29:41 +0200 Subject: [PVE-User] VMID clarifying In-Reply-To: <400e99c0227195fa41c3b6c8c499755ba02c3711.camel@junkyard.4t2.com> References: <1571750716.27pvuattzw.astroid@nora.none> <400e99c0227195fa41c3b6c8c499755ba02c3711.camel@junkyard.4t2.com> Message-ID: <165238cf-9aa2-5688-d706-deed6bab0301@truelite.it> Il 22/10/19 19:13, Tom Weber ha scritto: > and then others will argue that they don't want that space polluted by > random numbers... (i'm personnaly encoding IPs in the PVID and let the > lower numbers for testing) > I will argue that for sure. If you want to preserve old backups with same VMID just move end/or rename the file. Simone From gilberto.nunes32 at gmail.com Wed Oct 23 13:40:10 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 23 Oct 2019 08:40:10 -0300 Subject: [PVE-User] Strange behavior vzdump In-Reply-To: <20191022134420.GN6111@sv.lnf.it> References: <20191022134420.GN6111@sv.lnf.it> Message-ID: I am thing my problems comes after install https://github.com/ayufan/pve-patches I remove it but get this problem... --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em ter, 22 de out de 2019 ?s 10:44, Marco Gaiarin escreveu: > Mandi! Gilberto Nunes > In chel di` si favelave... > > > I have notice that vzdump options, maxfiles doesn't work properly. > > I set --maxfiles to 10, but still it's hold old files... 
> > For now, I add --remove 1, to the /etc/vzdump.conf, but, according to the > > vzdump man page, default --remove set is 1, i.e., enable! > > Why vzdump do not remove old backup, just when set maxfiles?? > > Or even worst, if --remove 1 is the default options, why vzdump doesn't > > work?? > > Proxmox VE version 5.4 > > This make some noise on my ear... two clusters, one with > ''traditional'' iSCSI SAN storage, one with Ceph. > > On Ceph one: > > root at hulk:~# ls /srv/pve/dump/ | grep \.lzo | cut -d '-' -f 1-3 | sort | > uniq -c > 1 vzdump-lxc-103 > 1 vzdump-lxc-105 > 1 vzdump-lxc-106 > 1 vzdump-lxc-109 > 3 vzdump-lxc-111 > 50 vzdump-lxc-114 > 49 vzdump-lxc-117 > 1 vzdump-qemu-104 > 3 vzdump-qemu-108 > 3 vzdump-qemu-113 > 1 vzdump-qemu-115 > 49 vzdump-qemu-116 > > My backup stategy is: > > + for some VM/LXC, daily backup (114, 116, 117 are 'daily') all day's week > apart saturday. > > + for all VM/LXC, on saturday bacula pre-script that run the backup, and > then > bacula put on tape. > > the bacula 'pre' script do: > > /usr/bin/vzdump 117 -storage Backup -maxfiles 1 -remove -compress > lzo -mode suspend -quiet -mailto ced at sv.lnf.it -mailnotification failure > > for every LXC/VM, and as you can see, delete old backup only for some > VM/LXC, not all. > > 'backup' storage is defined as: > > nfs: Backup > export /srv/pve > path /mnt/pve/Backup > server 10.27.251.11 > content vztmpl,images,iso,backup,rootdir > maxfiles 0 > options vers=3,soft,intr > > Clearly, no error in logs. > > -- > dott. Marco Gaiarin GNUPG Key ID: > 240A3D66 > Associazione ``La Nostra Famiglia'' > http://www.lanostrafamiglia.it/ > Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento > (PN) > marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f > +39-0434-842797 > > Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! > http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 > (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gaio at sv.lnf.it Wed Oct 23 16:16:19 2019 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 23 Oct 2019 16:16:19 +0200 Subject: [PVE-User] Strange behavior vzdump In-Reply-To: References: <20191022134420.GN6111@sv.lnf.it> Message-ID: <20191023141619.GH3589@sv.lnf.it> Mandi! Gilberto Nunes In chel di` si favelave... > I am thing my problems comes after install > https://github.com/ayufan/pve-patches no, i've never used that patchset... -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From damkobaranov at gmail.com Thu Oct 24 19:05:57 2019 From: damkobaranov at gmail.com (Demetri A. Mkobaranov) Date: Thu, 24 Oct 2019 19:05:57 +0200 Subject: [PVE-User] How to list templates via CLI Message-ID: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com> Hello, new user here. how can I list templates using CLI? qm list doesn't help me, I just see all the vms and I can't differentiate. 
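In case you mean VM templates (as opposed to the container/appliance templates handled by pveam): a template is just a guest whose config contains the line "template: 1", so a quick way to list them from the shell of any cluster node is something like the following (a sketch; this path covers QEMU guests, LXC configs live under .../lxc/ instead):

# grep -l '^template: 1' /etc/pve/nodes/*/qemu-server/*.conf

The file names returned correspond to the VMIDs of the templates.
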
Thanks From robert.strzelecki at freeola.co.uk Thu Oct 24 19:18:51 2019 From: robert.strzelecki at freeola.co.uk (Robert Strzelecki) Date: Thu, 24 Oct 2019 18:18:51 +0100 Subject: [PVE-User] How to list templates via CLI In-Reply-To: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com> References: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com> Message-ID: <3ec1ba09-8052-8973-e407-790b68c69236@freeola.co.uk> Hi, Take a look at https://pve.proxmox.com/pve-docs/api-viewer/ Rob On 24/10/2019 18:05, Demetri A. Mkobaranov wrote: > Hello, new user here. > > how can I list templates using CLI? qm list doesn't help me, I just > see all the vms and I can't differentiate. > > Thanks > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From robert.strzelecki at freeola.co.uk Thu Oct 24 19:27:06 2019 From: robert.strzelecki at freeola.co.uk (Robert Strzelecki) Date: Thu, 24 Oct 2019 18:27:06 +0100 Subject: [PVE-User] How to list templates via CLI In-Reply-To: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com> References: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com> Message-ID: Actually, sorry if you meant VM templates. For those check https://pve.proxmox.com/pve-docs/api-viewer/ nodes -> qemu - can't seem to link direct to the particular command (pvesh get /nodes/{node}/qemu) It returns a 'template' property (seemingly undocumented) that is either blank or the name of the template so you could just script to loop through all the VM's and extract the ones with a populate 'template' property. Rob On 24/10/2019 18:05, Demetri A. Mkobaranov wrote: > Hello, new user here. > > how can I list templates using CLI? qm list doesn't help me, I just > see all the vms and I can't differentiate. > > Thanks > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- Kind regards, Rob Strzelecki, Systems & Programming Manager. Tel: 01376 55 60 60 Fax: 01376 55 60 79 Web: http://freeola.com Freeola ------------------------------------------------------------------------ Freeola Limited registered in England number 5335999. VAT reference 859 1100 32 Registered office and trading address 94 Church Street, Bocking, Braintree, Essex, CM7 5JY Products and services are subject to terms and conditions which are available online at http://freeola.com/support/ For Customer Support please call 01376 55 60 60. ------------------------------------------------------------------------ From gaio at sv.lnf.it Fri Oct 25 10:59:48 2019 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Fri, 25 Oct 2019 10:59:48 +0200 Subject: [PVE-User] P2V of an XP box, VM ask for fwcfg driver... Message-ID: <20191025085948.GH5664@sv.lnf.it> Ok, it is a bit late to P2V a XP box, but... P2V done (on PVE 5.4), now the VMs ask at every boot to install the 'fwcfg' driver (qemufwcfg), but virtio-win CD 0.1.171 have only versions for win7+ OS. I can safely install driver for newer OS? Or where i can find (old?) driver for XP? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From gianni.milo22 at gmail.com Fri Oct 25 14:58:12 2019
From: gianni.milo22 at gmail.com (Gianni Milo)
Date: Fri, 25 Oct 2019 13:58:12 +0100
Subject: [PVE-User] P2V of an XP box, VM ask for fwcfg driver...
In-Reply-To: <20191025085948.GH5664@sv.lnf.it>
References: <20191025085948.GH5664@sv.lnf.it>
Message-ID:

Check if you can find something useful in here ...
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/

On Fri, 25 Oct 2019 at 09:59, Marco Gaiarin wrote:

> Ok, it is a bit late to P2V a XP box, but...
>
> P2V done (on PVE 5.4), now the VMs ask at every boot to install the
> 'fwcfg' driver (qemufwcfg), but virtio-win CD 0.1.171 have only
> versions for win7+ OS.
>
> I can safely install driver for newer OS? Or where i can find (old?)
> driver for XP?
>
> Thanks.
>
> --
> dott. Marco Gaiarin                          GNUPG Key ID: 240A3D66
> Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
> Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
> marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
> http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From nada at verdnatura.es Fri Oct 25 15:40:29 2019
From: nada at verdnatura.es (nada at verdnatura.es)
Date: Fri, 25 Oct 2019 15:40:29 +0200
Subject: [PVE-User] How to list templates via CLI
In-Reply-To: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com>
References: <85104d80-105c-5c3b-fa44-66cb2e95341e@gmail.com>
Message-ID:

Hi Demetri,
to manage (container) templates you may use these PVE commands:

# pveam list local
... you will see just the templates at your local node ...

and in case you need to download a new one:

# pveam update
# pveam available --section system
# pveam download local ubuntu-18.10-standard_18.10-1_amd64.tar.gz

... and in case you want to save some space and delete an old one:

# pveam remove local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz

BTW, they are stored in /var/lib/vz/template/cache/

Hope it helps, have a nice weekend
Nada

El 2019-10-24 19:05, Demetri A. Mkobaranov escribió:
> Hello, new user here.
>
> how can I list templates using CLI? qm list doesn't help me, I just
> see all the vms and I can't differentiate.
>
> Thanks
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From f.cuseo at panservice.it Mon Oct 28 15:47:18 2019
From: f.cuseo at panservice.it (Fabrizio Cuseo)
Date: Mon, 28 Oct 2019 15:47:18 +0100 (CET)
Subject: [PVE-User] SQL server 2014 poor performances
Message-ID: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it>

Hello.
I have a customer with Proxmox 5.X, 4 x SSD (WD Blue) in a RAID-10 ZFS
configuration, PowerEdge R710 dual Xeon and 144 GByte RAM. Same problem
with 4 x SAS 15k rpm drives.

A VM with Windows Server 2016, and SQL Server 2014. SQL performance is
very poor compared with a standard desktop PC with an i5 and a single
consumer-grade SSD.

Is there some tweak, like "Trace Flag T8038", to apply to SQL 2014 and/or
2017?
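Before touching SQL Server settings, the replies below suggest first measuring the ZFS pool itself on the host; roughly along these lines (a sketch only: the dataset path /rpool/data and the 8k block size are assumptions, adjust to the real pool and workload):

# quick fsync/seek check of the storage the VM sits on
pveperf /rpool/data

# synchronous 8k random writes, roughly what a database generates
fio --name=dbtest --filename=/rpool/data/fio.tmp --rw=randwrite --bs=8k --size=2G --ioengine=psync --sync=1 --runtime=60 --time_based

# watch the pool while the test runs, then remove the test file
zpool iostat -v 2
rm /rpool/data/fio.tmp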
Thanks in advance, Fabrizio Cuseo From jm at ginernet.com Mon Oct 28 15:59:05 2019 From: jm at ginernet.com (=?UTF-8?Q?Jos=c3=a9_Manuel_Giner?=) Date: Mon, 28 Oct 2019 15:59:05 +0100 Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <0224f774-4087-6b97-1049-77f66d0bc02b@ginernet.com> What is the exact CPU model? On 28/10/2019 15:47, Fabrizio Cuseo wrote: > Poweredge R710 dual xeon -- Jos? Manuel Giner https://ginernet.com From f.cuseo at panservice.it Mon Oct 28 16:11:37 2019 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 28 Oct 2019 16:11:37 +0100 (CET) Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: <0224f774-4087-6b97-1049-77f66d0bc02b@ginernet.com> References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> <0224f774-4087-6b97-1049-77f66d0bc02b@ginernet.com> Message-ID: <1891053974.20444.1572275497681.JavaMail.zimbra@zimbra.panservice.it> Thank you for your answer: CPU(s) 24 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (2 Sockets) The VM has 2 socket x 4 core, both KVM and host type, with NUMA enabled. ----- Il 28-ott-19, alle 15:59, Jos? Manuel Giner jm at ginernet.com ha scritto: > What is the exact CPU model? > > > > On 28/10/2019 15:47, Fabrizio Cuseo wrote: >> Poweredge R710 dual xeon > > -- > Jos? Manuel Giner > https://ginernet.com > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- --- From mir at miras.org Mon Oct 28 16:46:23 2019 From: mir at miras.org (Michael Rasmussen) Date: Mon, 28 Oct 2019 16:46:23 +0100 Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <20191028164623.0d62983d@sleipner.datanom.net> On Mon, 28 Oct 2019 15:47:18 +0100 (CET) Fabrizio Cuseo wrote: > Hello. > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same > problem with 4 x SAS 15k rpm drives. > Are you sure it is SSD? I don't recollect that WD has produced WD blue as SSD. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E mir datanom net https://pgp.key-server.io/pks/lookup?search=0xE501F51C mir miras org https://pgp.key-server.io/pks/lookup?search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: Follow each decision as closely as possible with its associated action. - The Elements of Programming Style (Kernighan & Plaugher) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From robert.strzelecki at freeola.co.uk Mon Oct 28 16:55:31 2019 From: robert.strzelecki at freeola.co.uk (Robert Strzelecki) Date: Mon, 28 Oct 2019 15:55:31 +0000 Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <0e6adfd2-f0e7-47e9-ac91-3b41c2722f69@freeola.co.uk> https://shop.westerndigital.com/products/internal-drives/wd-blue-3d-nand-sata-ssd#WDS250G2B0A On 28/10/2019 15:46, Michael Rasmussen via pve-user wrote: > On Mon, 28 Oct 2019 15:47:18 +0100 (CET) > Fabrizio Cuseo wrote: > >> Hello. >> I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS >> configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same >> problem with 4 x SAS 15k rpm drives. >> > Are you sure it is SSD? I don't recollect that WD has produced WD blue > as SSD. > -- Hilsen/Regards Michael Rasmussen > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Kind regards, Rob Strzelecki, From mark at openvs.co.uk Mon Oct 28 16:56:02 2019 From: mark at openvs.co.uk (Mark Adams) Date: Mon, 28 Oct 2019 15:56:02 +0000 Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> Message-ID: There is a WD Blue SSD - but it is a desktop drive, you probably shouldn't use it in a server. Are you using the virtio-scsi blockdev and the newest virtio drivers? also, have you tried with writeback enabled? Have you tested the performance of your ssd zpool from the command line on the host? On Mon, 28 Oct 2019 at 15:46, Michael Rasmussen via pve-user < pve-user at pve.proxmox.com> wrote: > > > > ---------- Forwarded message ---------- > From: Michael Rasmussen > To: pve-user at pve.proxmox.com > Cc: > Bcc: > Date: Mon, 28 Oct 2019 16:46:23 +0100 > Subject: Re: [PVE-User] SQL server 2014 poor performances > On Mon, 28 Oct 2019 15:47:18 +0100 (CET) > Fabrizio Cuseo wrote: > > > Hello. > > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS > > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same > > problem with 4 x SAS 15k rpm drives. > > > Are you sure it is SSD? I don't recollect that WD has produced WD blue > as SSD. > > -- > Hilsen/Regards > Michael Rasmussen > > Get my public GnuPG keys: > michael rasmussen cc > https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E > mir datanom net > https://pgp.key-server.io/pks/lookup?search=0xE501F51C > mir miras org > https://pgp.key-server.io/pks/lookup?search=0xE3E80917 > -------------------------------------------------------------- > /usr/games/fortune -es says: > Follow each decision as closely as possible with its associated action. 
> - The Elements of Programming Style (Kernighan & Plaugher) > > > > ---------- Forwarded message ---------- > From: Michael Rasmussen via pve-user > To: pve-user at pve.proxmox.com > Cc: Michael Rasmussen > Bcc: > Date: Mon, 28 Oct 2019 16:46:23 +0100 > Subject: Re: [PVE-User] SQL server 2014 poor performances > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.cuseo at panservice.it Mon Oct 28 17:15:05 2019 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 28 Oct 2019 17:15:05 +0100 (CET) Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <281880822.22556.1572279305437.JavaMail.zimbra@zimbra.panservice.it> ----- Il 28-ott-19, alle 16:56, Mark Adams mark at openvs.co.uk ha scritto: > There is a WD Blue SSD - but it is a desktop drive, you probably shouldn't > use it in a server. Hello mark. I know that is a desktop drive, but we are only testing the different performances between this and a desktop pc with a similar (or cheaper) desktop ssd. > Are you using the virtio-scsi blockdev and the newest virtio drivers? also, I am using the virtio-scsi, and virtio drivers (not more than 1 year old version) > have you tried with writeback enabled? Not yet. > Have you tested the performance of your ssd zpool from the command line on > the host? Do you mean vzperf ? PS: i don't know if the bottleneck is I/O or some problem like the SQL "content switch" setting. PPS: SQL server is 2017, not 2014. > On Mon, 28 Oct 2019 at 15:46, Michael Rasmussen via pve-user < > pve-user at pve.proxmox.com> wrote: > >> >> >> >> ---------- Forwarded message ---------- >> From: Michael Rasmussen >> To: pve-user at pve.proxmox.com >> Cc: >> Bcc: >> Date: Mon, 28 Oct 2019 16:46:23 +0100 >> Subject: Re: [PVE-User] SQL server 2014 poor performances >> On Mon, 28 Oct 2019 15:47:18 +0100 (CET) >> Fabrizio Cuseo wrote: >> >> > Hello. >> > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS >> > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same >> > problem with 4 x SAS 15k rpm drives. >> > >> Are you sure it is SSD? I don't recollect that WD has produced WD blue >> as SSD. >> >> -- >> Hilsen/Regards >> Michael Rasmussen >> >> Get my public GnuPG keys: >> michael rasmussen cc >> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E >> mir datanom net >> https://pgp.key-server.io/pks/lookup?search=0xE501F51C >> mir miras org >> https://pgp.key-server.io/pks/lookup?search=0xE3E80917 >> -------------------------------------------------------------- >> /usr/games/fortune -es says: >> Follow each decision as closely as possible with its associated action. 
>> - The Elements of Programming Style (Kernighan & Plaugher) >> >> >> >> ---------- Forwarded message ---------- >> From: Michael Rasmussen via pve-user >> To: pve-user at pve.proxmox.com >> Cc: Michael Rasmussen >> Bcc: >> Date: Mon, 28 Oct 2019 16:46:23 +0100 >> Subject: Re: [PVE-User] SQL server 2014 poor performances >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From f.cuseo at panservice.it Mon Oct 28 17:16:16 2019 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Mon, 28 Oct 2019 17:16:16 +0100 (CET) Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: <281880822.22556.1572279305437.JavaMail.zimbra@zimbra.panservice.it> References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> <281880822.22556.1572279305437.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <1220563804.22563.1572279376989.JavaMail.zimbra@zimbra.panservice.it> Sorry, PVEPERF not VZPERF ----- Il 28-ott-19, alle 17:15, Fabrizio Cuseo f.cuseo at panservice.it ha scritto: > ----- Il 28-ott-19, alle 16:56, Mark Adams mark at openvs.co.uk ha scritto: > >> There is a WD Blue SSD - but it is a desktop drive, you probably shouldn't >> use it in a server. > > Hello mark. > I know that is a desktop drive, but we are only testing the different > performances between this and a desktop pc with a similar (or cheaper) desktop > ssd. > > >> Are you using the virtio-scsi blockdev and the newest virtio drivers? also, > > I am using the virtio-scsi, and virtio drivers (not more than 1 year old > version) > >> have you tried with writeback enabled? > > Not yet. > > >> Have you tested the performance of your ssd zpool from the command line on >> the host? > > > Do you mean vzperf ? > > PS: i don't know if the bottleneck is I/O or some problem like the SQL "content > switch" setting. > > PPS: SQL server is 2017, not 2014. > > > >> On Mon, 28 Oct 2019 at 15:46, Michael Rasmussen via pve-user < >> pve-user at pve.proxmox.com> wrote: >> >>> >>> >>> >>> ---------- Forwarded message ---------- >>> From: Michael Rasmussen >>> To: pve-user at pve.proxmox.com >>> Cc: >>> Bcc: >>> Date: Mon, 28 Oct 2019 16:46:23 +0100 >>> Subject: Re: [PVE-User] SQL server 2014 poor performances >>> On Mon, 28 Oct 2019 15:47:18 +0100 (CET) >>> Fabrizio Cuseo wrote: >>> >>> > Hello. >>> > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS >>> > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same >>> > problem with 4 x SAS 15k rpm drives. >>> > >>> Are you sure it is SSD? I don't recollect that WD has produced WD blue >>> as SSD. 
>>> >>> -- >>> Hilsen/Regards >>> Michael Rasmussen >>> >>> Get my public GnuPG keys: >>> michael rasmussen cc >>> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E >>> mir datanom net >>> https://pgp.key-server.io/pks/lookup?search=0xE501F51C >>> mir miras org >>> https://pgp.key-server.io/pks/lookup?search=0xE3E80917 >>> -------------------------------------------------------------- >>> /usr/games/fortune -es says: >>> Follow each decision as closely as possible with its associated action. >>> - The Elements of Programming Style (Kernighan & Plaugher) >>> >>> >>> >>> ---------- Forwarded message ---------- >>> From: Michael Rasmussen via pve-user >>> To: pve-user at pve.proxmox.com >>> Cc: Michael Rasmussen >>> Bcc: >>> Date: Mon, 28 Oct 2019 16:46:23 +0100 >>> Subject: Re: [PVE-User] SQL server 2014 poor performances >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- > --- > Fabrizio Cuseo - mailto:f.cuseo at panservice.it > Direzione Generale - Panservice InterNetWorking > Servizi Professionali per Internet ed il Networking > Panservice e' associata AIIP - RIPE Local Registry > Phone: +39 0773 410020 - Fax: +39 0773 470219 > http://www.panservice.it mailto:info at panservice.it > Numero verde nazionale: 800 901492 > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From gianni.milo22 at gmail.com Mon Oct 28 18:26:46 2019 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Mon, 28 Oct 2019 17:26:46 +0000 Subject: [PVE-User] SQL server 2014 poor performances In-Reply-To: <281880822.22556.1572279305437.JavaMail.zimbra@zimbra.panservice.it> References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it> <281880822.22556.1572279305437.JavaMail.zimbra@zimbra.panservice.it> Message-ID: > > > Have you tested the performance of your ssd zpool from the command line > on > > the host? > > > Do you mean vzperf ? > I think he means doing zfs performance tests on the host itself rather than inside the VM. This is in order to rule out the possibility that the slow performance is caused by the virtualization layer(s). You can use tools like "zpool iostat" , "iostat" , "fio", "dd" etc, to benchmark the performance on the host. If the results are good on the host, then you can move further, optimizing the virtualization layer. My understanding is that zfs needs proper tuning when it's used for databases, but I might be wrong. Searching in zfs mailing list archives might give you some clues on this. Gianni From humbertos at ifsc.edu.br Tue Oct 29 12:41:19 2019 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Tue, 29 Oct 2019 08:41:19 -0300 (BRT) Subject: kernel panic after live migration Message-ID: <1821310515.1550661.1572349279417.JavaMail.zimbra@ifsc.edu.br> Hi everyone! 
Sometimes, when I do a live migration, the VM stops with a kernel panic
after the host change. I'm using CPU type kvm64. This happens with Debian 9
and Debian 10 AMD64. The kernel panic also happens when the hosts are the
same type. Does anyone have the same problem?

Information about the hosts:

3 hosts
CPU(s) 16 x AMD Opteron(TM) Processor 6212 (2 Sockets)
Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200)
PVE Manager Version pve-manager/5.4-13/aee6f0ec

1 host
CPU(s) 40 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (2 Sockets)
Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200)
PVE Manager Version pve-manager/5.4-13/aee6f0ec

2 hosts
CPU(s) 4 x Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (1 Socket)
Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200)
PVE Manager Version pve-manager/5.4-13/aee6f0ec

1 host
CPU(s) 16 x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz (1 Socket)
Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200)
PVE Manager Version pve-manager/5.4-13/aee6f0ec

Best regards.
Humberto

From jm at ginernet.com Tue Oct 29 14:11:56 2019
From: jm at ginernet.com (José Manuel Giner)
Date: Tue, 29 Oct 2019 14:11:56 +0100
Subject: [PVE-User] SQL server 2014 poor performances
In-Reply-To: <1891053974.20444.1572275497681.JavaMail.zimbra@zimbra.panservice.it>
References: <1160773410.19887.1572274038519.JavaMail.zimbra@zimbra.panservice.it>
 <0224f774-4087-6b97-1049-77f66d0bc02b@ginernet.com>
 <1891053974.20444.1572275497681.JavaMail.zimbra@zimbra.panservice.it>
Message-ID: <28b81837-aae7-a5bf-087e-a5db05b04149@ginernet.com>

Hello, a new i5 is much more powerful than your X5650, especially in
single core. Also I'm not sure if your Dell supports SATA 3 (600 Mbps) or
is limited to SATA 2 (300 Mbps).

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5650+%40+2.67GHz&id=1304
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-9600K+%40+3.70GHz&id=3337

On 28/10/2019 16:11, Fabrizio Cuseo wrote:
> Thank you for your answer:
>
> CPU(s) 24 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (2 Sockets)
>
> The VM has 2 socket x 4 core, both KVM and host type, with NUMA enabled.
>
> ----- Il 28-ott-19, alle 15:59, José Manuel Giner jm at ginernet.com ha scritto:
>
>> What is the exact CPU model?
>>
>> On 28/10/2019 15:47, Fabrizio Cuseo wrote:
>>> Poweredge R710 dual xeon
>>
>> --
>> José Manuel Giner
>> https://ginernet.com
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

--
José Manuel Giner
https://ginernet.com

From brians at iptel.co Tue Oct 29 17:10:05 2019
From: brians at iptel.co (Brian :)
Date: Tue, 29 Oct 2019 16:10:05 +0000
Subject: [PVE-User] kernel panic after live migration
In-Reply-To:
References:
Message-ID:

Hello

You would need to be a bit more verbose if you expect help.

Version of proxmox? What panics? Host, guest? Guest os? Server hardware?
Disks? Any logs?

As much relevant info as you can provide...
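Most of what is being asked for here can be collected with something like the following (the VMID 100 is only a placeholder, and the last command only works if persistent journaling is enabled in the guest):

# on the source and target PVE hosts
pveversion -v        # exact kernel / qemu-server / pve-manager package versions
qm config 100        # CPU type, machine type and disk layout of the affected VM

# inside the guest, after the crash/reset
journalctl -k -b -1  # kernel log of the previous (panicked) boot, if the journal is persistent
dmesg -T             # kernel messages of the current boot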
On Tuesday, October 29, 2019, Humberto Jose De Sousa via pve-user < pve-user at pve.proxmox.com> wrote: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From humbertos at ifsc.edu.br Tue Oct 29 20:11:26 2019 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Tue, 29 Oct 2019 16:11:26 -0300 (BRT) Subject: [PVE-User] kernel panic after live migration In-Reply-To: References: Message-ID: <1836784658.1976534.1572376286506.JavaMail.zimbra@ifsc.edu.br> Thanks for answer Version of proxmox? pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-21-pve) What panics? Host, guest? Error only on guest https://postimg.cc/1nHpLPWq https://postimg.cc/ygBmQWTK Guest os? Debian 9 and Debian 10 Server hardware? 3 hosts CPU(s) 16 x AMD Opteron(TM) Processor 6212 (2 Sockets) 64 GB RAM Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200) PVE Manager Version pve-manager/5.4-13/aee6f0ec 1 host CPU(s) 40 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (2 Sockets) 192 GB RAM Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200) PVE Manager Version pve-manager/5.4-13/aee6f0ec 2 hosts CPU(s) 4 x Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (1 Socket) 32 GB RAM Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200) PVE Manager Version pve-manager/5.4-13/aee6f0ec 1 host CPU(s) 16 x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz (1 Socket) 32 GB RAM Kernel Version Linux 4.15.18-21-pve #1 SMP PVE 4.15.18-48 (Fri, 20 Sep 2019 11:28:30 +0200) PVE Manager Version pve-manager/5.4-13/aee6f0ec Disks? All VMs use rbd disk in cluster by pveceph (Ceph 12). Any logs? I don't found the error on logs :/ De: "Brian :" Para: "PVE User List" Cc: "Humberto Jose De Sousa" Enviadas: Ter?a-feira, 29 de outubro de 2019 13:10:05 Assunto: Re: [PVE-User] kernel panic after live migration Hello You would need to be a bit more verbose if you expect help. Version of proxmox? What panics? Host, guest? Guest os? Server hardware? Disks? Any logs? As much relevant info as as you can provide... On Tuesday, October 29, 2019, Humberto Jose De Sousa via pve-user < pve-user at pve.proxmox.com > wrote: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Wed Oct 30 13:05:15 2019 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Wed, 30 Oct 2019 13:05:15 +0100 (CET) Subject: [PVE-User] kernel panic after live migration In-Reply-To: References: Message-ID: <891296030.1058470.1572437115990.JavaMail.zimbra@odiso.com> is it between amd and intel host ? because, in past, it was never stable. (I had also problem between different amd generation). ----- Mail original ----- De: "proxmoxve" ?: "proxmoxve" Cc: "Humberto Jose De Sousa" Envoy?: Mardi 29 Octobre 2019 12:41:19 Objet: [PVE-User] kernel panic after live migration _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Wed Oct 30 15:15:24 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 30 Oct 2019 11:15:24 -0300 Subject: [PVE-User] Add SSD journal to OSD's Message-ID: Hi there I have cluster w/ 5 pve ceph servers. 
One of these servers lost the entire journal SSD device and I need to add a
new one. I do not have the old SSD anymore. So, below are the steps I
intend to follow... I just need you guys to let me know if these steps are
correct or not and, if you could, add some advice.

1 - Set noout
2 - Destroy the OSD that is already down
3 - Recreate the OSD on the server whose SSD failed

These steps will make Ceph rebalance the data, right? Can I lose data?

My cluster right now has some inconsistency:

ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 21.181 is active+clean+inconsistent, acting [18,2,6]

Is there a problem doing the steps above with the cluster in this state?
This PG is on another OSD... OSD.18, which lives on another server...

Thanks a lot for any help

---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36

From humbertos at ifsc.edu.br Wed Oct 30 15:55:06 2019
From: humbertos at ifsc.edu.br (Humberto Jose De Sousa)
Date: Wed, 30 Oct 2019 11:55:06 -0300 (BRT)
Subject: [PVE-User] kernel panic after live migration
In-Reply-To: <891296030.1058470.1572437115990.JavaMail.zimbra@odiso.com>
References: <891296030.1058470.1572437115990.JavaMail.zimbra@odiso.com>
Message-ID: <362604519.2496680.1572447306234.JavaMail.zimbra@ifsc.edu.br>

Today one VM hit the error when migrating from AMD to Intel. But sometimes
it doesn't happen. Sometimes it happens even with the same CPU type.

De: "Alexandre DERUMIER"
Para: "proxmoxve"
Cc: "Humberto Jose De Sousa"
Enviadas: Quarta-feira, 30 de outubro de 2019 9:05:15
Assunto: Re: [PVE-User] kernel panic after live migration

is it between amd and intel host ? because, in past, it was never stable.
(I had also problem between different amd generation).

----- Mail original -----
De: "proxmoxve"
À: "proxmoxve"
Cc: "Humberto Jose De Sousa"
Envoyé: Mardi 29 Octobre 2019 12:41:19
Objet: [PVE-User] kernel panic after live migration

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From hermann at qwer.tk Wed Oct 30 19:02:47 2019
From: hermann at qwer.tk (Hermann Himmelbauer)
Date: Wed, 30 Oct 2019 19:02:47 +0100
Subject: [PVE-User] Upgrade PVE 5 -> 6: Switch to predictable network names?
Message-ID:

Hi,
I'm currently migrating my test cluster from PVE 5 to 6. As this also
means migrating from Debian 9 -> 10, I wonder if I have to switch to
predictable network names?

In the Debian 10 migration docs, this seems to be needed (e.g. eth0 ->
ensxxx), e.g. like the following:

rgrep -w eth0 /etc
udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null
-> replace eth-names with new predictable name
rm /etc/udev/rules.d/70-persistent-net.rules
update-initramfs -u
-> reboot

So, should I do this? Or does proxmox handle this in some other way?

Best Regards,
Hermann

--
hermann at qwer.tk PGP/GPG: 299893C7 (on keyservers)

From hermann at qwer.tk Thu Oct 31 12:01:11 2019
From: hermann at qwer.tk (Hermann Himmelbauer)
Date: Thu, 31 Oct 2019 12:01:11 +0100
Subject: [PVE-User] Upgrade PVE 5 -> 6: Switch to predictable network names?
In-Reply-To:
References:
Message-ID: <806b2bef-0b20-972a-0af0-aac17eb1c80b@qwer.tk>

Hi,
Just to let you know: I manually renamed the eth0 / eth1 network
interfaces to the predictable names, this works fine.
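For reference, the manual rename mostly comes down to pointing the bridge at the new interface name in /etc/network/interfaces (a sketch only: eno1 is an assumed new name taken from the udevadm output mentioned above, and the addresses are example values):

# /etc/network/interfaces, relevant lines before the rename
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# after the rename only the physical NIC name changes:
#   iface eth0 inet manual  ->  iface eno1 inet manual
#   bridge_ports eth0       ->  bridge_ports eno1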
Renaming can be done in Debian9/Proxmox5 before the upgrade or right after the upgrade before the reboot. I also have Infiniband interfaces in my nodes (ib0), these cannot be renamed in Debian9, so the renaming has to be done either before the reboot or afterwards. Btw - updating my test-cluster worked like a charm from Proxmox 4 -> 5 -> 6, so, congratulations to the Proxmox team! Best Regards, Hermann Am 30.10.19 um 19:02 schrieb Hermann Himmelbauer: > Hi, > I'm currently migrating my test cluster from PVE 5 to 6. As this also > means migrating from Debian 9 -> 10, I wonder if I have to switch to > predictable network names? > > In the Debian 10 migration docs, this seems to be needed (e.g. eth0 -> > ensxxx), e.g. like the following: > > rgrep -w eth0 /etc > udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null > -> replace eth-names with new predictable name > rm /etc/udev/rules.d/70-persistent-net.rules > update-initramfs -u > -> reboot > > So, should I do this? Or does proxmox handle this in some other way? > > Best Regards, > Hermann > -- hermann at qwer.tk PGP/GPG: 299893C7 (on keyservers)