From gaio at sv.lnf.it Tue Nov 5 18:30:56 2019
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Tue, 5 Nov 2019 18:30:56 +0100
Subject: [PVE-User] Container restore with pct, --rootfs syntax?
Message-ID: <20191105173056.GL2705@sv.lnf.it>

I need to 'resize' (shrink) a container, so I've done a backup and, following some Google-fu and the 'pct' manpage, I've done:

 root at tma-18:~# pct restore 130 /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs volume=local,size=4G --force
 unable to parse volume ID 'local'

finally I've done:

 root at tma-18:~# pct restore 130 /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs volume=local:4,size=4G --force
 Formatting '/var/lib/vz/images/130/vm-130-disk-1.raw', fmt=raw size=4294967296
 [...]

and it works as expected, but it seems very strange to me...

What is the correct syntax? Thanks.

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From f.gruenbichler at proxmox.com Wed Nov 6 08:16:37 2019
From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=)
Date: Wed, 06 Nov 2019 08:16:37 +0100
Subject: [PVE-User] Container restore with pct, --rootfs syntax?
In-Reply-To: <20191105173056.GL2705@sv.lnf.it>
References: <20191105173056.GL2705@sv.lnf.it>
Message-ID: <1573024270.ds5bir6tb3.astroid@nora.none>

On November 5, 2019 6:30 pm, Marco Gaiarin wrote:
>
> I need to 'resize' (shrink) a container, so I've done a backup and,
> following some Google-fu and the 'pct' manpage, I've done:
>
> root at tma-18:~# pct restore 130 /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs volume=local,size=4G --force
> unable to parse volume ID 'local'
>
> finally I've done:
>
> root at tma-18:~# pct restore 130 /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs volume=local:4,size=4G --force
> Formatting '/var/lib/vz/images/130/vm-130-disk-1.raw', fmt=raw size=4294967296
> [...]
>
> and it works as expected, but it seems very strange to me...
>
>
> What is the correct syntax? Thanks.

--rootfs STORAGE:SIZE_IN_GB

e.g.,

--rootfs local:4

see 'Storage Backed Mount Points' in https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_settings
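applied to your example that would be something like (untested here, storage name and size taken from your own command):

 pct restore 130 /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs local:4 --force

i.e. plain STORAGE:SIZE_IN_GB, without the volume=/size= keys.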
From gaio at sv.lnf.it Wed Nov 6 10:18:47 2019
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Wed, 6 Nov 2019 10:18:47 +0100
Subject: [PVE-User] Container restore with pct, --rootfs syntax?
In-Reply-To: <1573024270.ds5bir6tb3.astroid@nora.none>
References: <20191105173056.GL2705@sv.lnf.it> <1573024270.ds5bir6tb3.astroid@nora.none>
Message-ID: <20191106091847.GE2759@sv.lnf.it>

Mandi! Fabian Grünbichler
In chel di` si favelave...

> > What is the correct syntax? Thanks.
> --rootfs STORAGE:SIZE_IN_GB
> e.g.,
> --rootfs local:4
> see 'Storage Backed Mount Points' in https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_settings

OK, I think I should have read this part:

 Note The special option syntax STORAGE_ID:SIZE_IN_GB for storage backed mount point volumes will automatically allocate a volume of the specified size on the specified storage. E.g., calling pct set 100 -mp0 thin1:10,mp=/path/in/container will allocate a 10GB volume on the storage thin1 and replace the volume ID place holder 10 with the allocated volume ID.

so the doc (and manpage) explain the configuration file format, not the pct command line, right?

Also, while I'm here: for VMs I can 'detach' additional volumes, to prevent a backup/restore from destroying them; why is that not possible for LXC, e.g. a restore 'destroys' all container volumes (and there's no way to detach them)?

Thanks.

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From f.gruenbichler at proxmox.com Wed Nov 6 10:31:16 2019
From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=)
Date: Wed, 06 Nov 2019 10:31:16 +0100
Subject: [PVE-User] Container restore with pct, --rootfs syntax?
In-Reply-To: <20191106091847.GE2759@sv.lnf.it>
References: <20191105173056.GL2705@sv.lnf.it> <1573024270.ds5bir6tb3.astroid@nora.none> <20191106091847.GE2759@sv.lnf.it>
Message-ID: <1573032522.nvb5d98yzv.astroid@nora.none>

On November 6, 2019 10:18 am, Marco Gaiarin wrote:
> Mandi! Fabian Grünbichler
> In chel di` si favelave...
>
>> > What is the correct syntax? Thanks.
>
>> --rootfs STORAGE:SIZE_IN_GB
>> e.g.,
>> --rootfs local:4
>> see 'Storage Backed Mount Points' in https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_settings
>
> OK, I think I should have read this part:
>
> Note The special option syntax STORAGE_ID:SIZE_IN_GB for storage backed mount point volumes will automatically allocate a volume of the specified size on the specified storage. E.g., calling pct set 100 -mp0 thin1:10,mp=/path/in/container will allocate a 10GB volume on the storage thin1 and replace the volume ID place holder 10 with the allocated volume ID.
>
> so the doc (and manpage) explain the configuration file format, not the
> pct command line, right?

they correspond almost 1:1 ;)

> Also, while I'm here: for VMs I can 'detach' additional volumes, to prevent
> a backup/restore from destroying them; why is that not possible for LXC,
> e.g. a restore 'destroys' all container volumes (and there's no way to
> detach them)?

because qm restore and pct restore are implemented differently.

'pct restore' is more like 'pct create' with a backup archive+config as base instead of just a container template. 'qm restore' is really just restoring, with little possibility to change anything on the fly.

From elacunza at binovo.es Thu Nov 7 15:35:38 2019
From: elacunza at binovo.es (Eneko Lacunza)
Date: Thu, 7 Nov 2019 15:35:38 +0100
Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6
In-Reply-To: <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr>
References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr>
Message-ID: 

Hi all,

We updated our office cluster to get the patch, but got a node reboot on the 31st of October. Node was fenced and rebooted, everything continued working OK.

Is anyone still experiencing this problem?

Cheers
Eneko

El 2/10/19 a las 18:09, Hervé Ballans escribió:
> Hi Alexandre,
>
> We encouter exactly the same problem as Laurent Caron (after upgrade
> from 5 to 6).
> > So I tried your patch 3 days ago, but unfortunately, the problem still > occurs... > > This is a really annoying problem, since sometimes, all the PVE nodes > of our cluster reboot quasi-simultaneously ! > And in the same time, we don't encounter this problem with our other > PVE cluster in version 5. > (And obviously we are waiting for a solution and a stable situation > before upgrade it !) > > It seems to be a unicast or corosync3 problem, but logs are not really > verbose at the time of reboot... > > Is there anything else to test ? > > Regards, > Herv? > > Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit?: >> Hi, >> >> a patch is available in pvetest >> >> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >> >> >> can you test it ? >> >> (you need to restart corosync after install of the deb) >> >> >> ----- Mail original ----- >> De: "Laurent CARON" >> ?: "proxmoxve" >> Envoy?: Lundi 16 Septembre 2019 09:55:34 >> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >> >> Hi, >> >> >> After upgrading our 4 node cluster from PVE 5 to 6, we experience >> constant crashed (once every 2 days). >> >> Those crashes seem related to corosync. >> >> Since numerous users are reporting sych issues (broken cluster after >> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >> corosync to version 2.4.4 without impacting functionnality ? >> >> Basic steps would be: >> >> On all nodes >> >> # systemctl stop pve-ha-lrm >> >> Once done, on all nodes: >> >> # systemctl stop pve-ha-crm >> >> Once done, on all nodes: >> >> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >> >> Then, once corosync has been downgraded, on all nodes >> >> # systemctl start pve-ha-lrm >> # systemctl start pve-ha-crm >> >> Would that work ? >> >> Thanks >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From aderumier at odiso.com Fri Nov 8 11:18:58 2019 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Fri, 8 Nov 2019 11:18:58 +0100 (CET) Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> Message-ID: <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> Hi, do you have upgrade all your nodes to corosync 3.0.2-pve4 libknet1:amd64 1.13-pve1 ? (available in pve-no-subscription et pve-enteprise repos) ----- Mail original ----- De: "Eneko Lacunza" ?: "proxmoxve" Envoy?: Jeudi 7 Novembre 2019 15:35:38 Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 Hi all, We updated our office cluster to get the patch, but got a node reboot on 31th october. 
Node was fenced and rebooted, everything continued working OK. Is anyone experencing yet this problem? Cheers Eneko El 2/10/19 a las 18:09, Herv? Ballans escribi?: > Hi Alexandre, > > We encouter exactly the same problem as Laurent Caron (after upgrade > from 5 to 6). > > So I tried your patch 3 days ago, but unfortunately, the problem still > occurs... > > This is a really annoying problem, since sometimes, all the PVE nodes > of our cluster reboot quasi-simultaneously ! > And in the same time, we don't encounter this problem with our other > PVE cluster in version 5. > (And obviously we are waiting for a solution and a stable situation > before upgrade it !) > > It seems to be a unicast or corosync3 problem, but logs are not really > verbose at the time of reboot... > > Is there anything else to test ? > > Regards, > Herv? > > Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit : >> Hi, >> >> a patch is available in pvetest >> >> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >> >> >> can you test it ? >> >> (you need to restart corosync after install of the deb) >> >> >> ----- Mail original ----- >> De: "Laurent CARON" >> ?: "proxmoxve" >> Envoy?: Lundi 16 Septembre 2019 09:55:34 >> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >> >> Hi, >> >> >> After upgrading our 4 node cluster from PVE 5 to 6, we experience >> constant crashed (once every 2 days). >> >> Those crashes seem related to corosync. >> >> Since numerous users are reporting sych issues (broken cluster after >> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >> corosync to version 2.4.4 without impacting functionnality ? >> >> Basic steps would be: >> >> On all nodes >> >> # systemctl stop pve-ha-lrm >> >> Once done, on all nodes: >> >> # systemctl stop pve-ha-crm >> >> Once done, on all nodes: >> >> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >> >> Then, once corosync has been downgraded, on all nodes >> >> # systemctl start pve-ha-lrm >> # systemctl start pve-ha-crm >> >> Would that work ? >> >> Thanks >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From karel.gonzalez at etecsa.cu Fri Nov 8 15:34:43 2019 From: karel.gonzalez at etecsa.cu (Karel Gonzalez Herrera) Date: Fri, 8 Nov 2019 09:34:43 -0500 Subject: vm no booting Message-ID: <5d40300c-5dd9-2644-4be9-4d5d58c08155@etecsa.cu> after a backup of one vm in proxmox 6.0 the vm stops booting any ideas sldssssssssssss Ing. 
Karel Gonz?lez Herrera Administrador de Red Etecsa: Direcci?n Territorial Norte e-mail: karel.gonzalez at etecsa.cu Tel: 8344973 8607483 Mov: 52182690 From gilberto.nunes32 at gmail.com Fri Nov 8 15:45:58 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Fri, 8 Nov 2019 11:45:58 -0300 Subject: [PVE-User] vm no booting In-Reply-To: References: Message-ID: any logs? dmesg syslog ??? --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em sex., 8 de nov. de 2019 ?s 11:35, Karel Gonzalez Herrera via pve-user < pve-user at pve.proxmox.com> escreveu: > > > > ---------- Forwarded message ---------- > From: Karel Gonzalez Herrera > To: pve-user at pve.proxmox.com > Cc: > Bcc: > Date: Fri, 8 Nov 2019 09:34:43 -0500 > Subject: vm no booting > after a backup of one vm in proxmox 6.0 the vm stops booting any ideas > sldssssssssssss > > > > > > Ing. Karel Gonz?lez Herrera > Administrador de Red > Etecsa: Direcci?n Territorial Norte > e-mail: karel.gonzalez at etecsa.cu > Tel: 8344973 8607483 > Mov: 52182690 > > > > > > ---------- Forwarded message ---------- > From: Karel Gonzalez Herrera via pve-user > To: pve-user at pve.proxmox.com > Cc: Karel Gonzalez Herrera > Bcc: > Date: Fri, 8 Nov 2019 09:34:43 -0500 > Subject: [PVE-User] vm no booting > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From mark at openvs.co.uk Fri Nov 8 16:22:08 2019 From: mark at openvs.co.uk (Mark Adams) Date: Fri, 8 Nov 2019 15:22:08 +0000 Subject: [PVE-User] Reboot on psu failure in redundant setup Message-ID: Hi All, This cluster is on 5.4-11. This is most probably a hardware issue either with ups or server psus, but wanted to check if there is any default watchdog or auto reboot in a proxmox HA cluster. Explanation of what happened: All servers have redundant psu, being fed from separate ups in separate racks on separate feeds. One of the UPS went out, and when it did all nodes rebooted. They were functioning normally after the reboot, but I wasn't expecting the reboot to occur. When the UPS went down, it also took down all of the core network because the power was not connected up in a redundant fashion. Ceph and "LAN" traffic was blocked because of this. Did a watchdog reboot each node because it lost contact with its cluster peers? I didn't configure it to do this myself, so is this an automatic feature? Everything I have read says it should be configured manually. Thanks in advance. Cheers, Mark From daniel at firewall-services.com Fri Nov 8 16:35:09 2019 From: daniel at firewall-services.com (Daniel Berteaud) Date: Fri, 8 Nov 2019 16:35:09 +0100 (CET) Subject: [PVE-User] Reboot on psu failure in redundant setup In-Reply-To: References: Message-ID: <1138891462.16344.1573227309978.JavaMail.zimbra@fws.fr> ----- Le 8 Nov 19, ? 16:22, Mark Adams mark at openvs.co.uk a ?crit : > Hi All, > > This cluster is on 5.4-11. > > This is most probably a hardware issue either with ups or server psus, but > wanted to check if there is any default watchdog or auto reboot in a > proxmox HA cluster. > > Explanation of what happened: > > All servers have redundant psu, being fed from separate ups in > separate racks on separate feeds. One of the UPS went out, and when it did > all nodes rebooted. They were functioning normally after the reboot, but I > wasn't expecting the reboot to occur. 
> > When the UPS went down, it also took down all of the core network because > the power was not connected up in a redundant fashion. Ceph and "LAN" > traffic was blocked because of this. Did a watchdog reboot each node > because it lost contact with its cluster peers? I didn't configure it to do > this myself, so is this an automatic feature? Everything I have read says > it should be configured manually. > > Thanks in advance. Yes, that's expected. If all nodes are isolated from each other, they will be self-fenced (using a software watchdog) to prevent any corruption and allow services to be recovered on the quorate part of the cluster. In your case, there was no quorate part, as there was no network at all. Cheers Daniel -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From t.lamprecht at proxmox.com Fri Nov 8 16:45:13 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 8 Nov 2019 16:45:13 +0100 Subject: [PVE-User] Reboot on psu failure in redundant setup In-Reply-To: <1138891462.16344.1573227309978.JavaMail.zimbra@fws.fr> References: <1138891462.16344.1573227309978.JavaMail.zimbra@fws.fr> Message-ID: Hi, On 11/8/19 4:35 PM, Daniel Berteaud wrote: > ----- Le 8 Nov 19, ? 16:22, Mark Adams mark at openvs.co.uk a ?crit : >> Hi All, >> >> This cluster is on 5.4-11. >> >> This is most probably a hardware issue either with ups or server psus, but >> wanted to check if there is any default watchdog or auto reboot in a >> proxmox HA cluster. >> >> Explanation of what happened: >> >> All servers have redundant psu, being fed from separate ups in >> separate racks on separate feeds. One of the UPS went out, and when it did >> all nodes rebooted. They were functioning normally after the reboot, but I >> wasn't expecting the reboot to occur. >> >> When the UPS went down, it also took down all of the core network because >> the power was not connected up in a redundant fashion. Ceph and "LAN" >> traffic was blocked because of this. Did a watchdog reboot each node >> because it lost contact with its cluster peers? I didn't configure it to do >> this myself, so is this an automatic feature? Everything I have read says >> it should be configured manually. >> >> Thanks in advance. > > Yes, that's expected. If all nodes are isolated from each other, they will be self-fenced (using a software watchdog) to prevent any corruption and allow services to be recovered on the quorate part of the cluster. In your case, there was no quorate part, as there was no network at all. Small addition, it can also be a HW Watchdog if configured[0]. And yes, as soon as you enable a HA service that node and the current HA manager node will enable and pull-up a watchdog. And, if the node hangs or there's a quorum loss for more than 60s, the watchdog updates will stop and the node will get self-fenced soon afterwards (not more than a few seconds). 
[0]: https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#_configure_hardware_watchdog

cheers,
Thomas

From t.lamprecht at proxmox.com Fri Nov 8 16:48:50 2019
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Fri, 8 Nov 2019 16:48:50 +0100
Subject: [PVE-User] Reboot on psu failure in redundant setup
In-Reply-To: 
References: 
Message-ID: 

On 11/8/19 4:22 PM, Mark Adams wrote:
> I didn't configure it to do
> this myself, so is this an automatic feature? Everything I have read says
> it should be configured manually.

Maybe my previous mail did not answer this point in a good way.

You need to configure *hardware-based* Watchdogs manually. But the
fallback will *always* be the Linux Kernel Softdog (which is very
reliable, from experience ^^) - else, without fencing, HA recovery
could never be done in a safe way (double resource usage).

cheers,
Thomas

From mark at openvs.co.uk Fri Nov 8 16:55:27 2019
From: mark at openvs.co.uk (Mark Adams)
Date: Fri, 8 Nov 2019 15:55:27 +0000
Subject: [PVE-User] Reboot on psu failure in redundant setup
In-Reply-To: 
References: 
Message-ID: 

Hi Daniel, Thomas,

When I looked back at the docs after reading Daniel's email I saw exactly what you're saying Thomas, that it's hardware watchdogs only which are disabled and need to be manually enabled, and that pve-ha-crm has a software watchdog enabled by default.

Thanks for both your responses and for clearing this up for me.

Cheers,
Mark

On Fri, 8 Nov 2019 at 15:48, Thomas Lamprecht wrote:

> On 11/8/19 4:22 PM, Mark Adams wrote:
> > I didn't configure it to do
> > this myself, so is this an automatic feature? Everything I have read says
> > it should be configured manually.
>
> Maybe my previous mail did not answer this point in a good way.
>
> You need to configure *hardware-based* Watchdogs manually. But the
> fallback will *always* be the Linux Kernel Softdog (which is very
> reliable, from experience ^^) - else, without fencing, HA recovery
> could never be done in a safe way (double resource usage).
>
> cheers,
> Thomas
>

From aderumier at odiso.com Fri Nov 8 17:35:59 2019
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Fri, 8 Nov 2019 17:35:59 +0100 (CET)
Subject: [PVE-User] VMID clarifying
In-Reply-To: 
References: <1571750716.27pvuattzw.astroid@nora.none> 
Message-ID: <1862972186.1406604.1573230959572.JavaMail.zimbra@odiso.com>

>>Reading that the final suggestion is: using a random number, then
>>perhaps pve could simply suggest a random VMID number between 1 and 4
>>billion.

use a UUID in this case.

The problem is that VM tap interfaces, for example, use the vmid in their name (tap<VMID>i<N>), and that name is limited to 16 characters by the Linux kernel. (So this would need some kind of table/conf mapping when the taps are generated.)

----- Mail original -----
De: "lists" 
À: "proxmoxve" 
Envoyé: Mardi 22 Octobre 2019 16:47:43
Objet: Re: [PVE-User] VMID clarifying

ok, have read it.

Pity, the outcome.

Reading that the final suggestion is: using a random number, then perhaps pve could simply suggest a random VMID number between 1 and 4 billion. (and if already in use: choose another random number)

No need to store anything anywhere, and the chances of duplicating a VMID would be virtually zero.
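Just to illustrate the idea from the shell (a rough, untested sketch; the API call and the upper bound are only examples):

 # pick a random candidate and retry while it is already taken
 vmid=$(shuf -i 100-999999999 -n 1)
 while pvesh get /cluster/resources --type vm --output-format json | grep -q "\"vmid\":$vmid[,}]"; do
     vmid=$(shuf -i 100-999999999 -n 1)
 done
 echo "suggested VMID: $vmid"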
MJ On 22-10-2019 16:19, Dominik Csapak wrote: > there was already a lengthy discussion of this topic on the bugtracker > see https://bugzilla.proxmox.com/show_bug.cgi?id=1822 > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Tue Nov 12 16:38:12 2019 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 12 Nov 2019 16:38:12 +0100 Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> Message-ID: Hi all, We are seeing this also with 5.4-3 clusters, a node was fenced in two different clusters without any apparent reason. Neither of the clusters had a node fence before... Cheers Eneko El 7/11/19 a las 15:35, Eneko Lacunza escribi?: > Hi all, > > We updated our office cluster to get the patch, but got a node reboot > on 31th october. Node was fenced and rebooted, everything continued > working OK. > > Is anyone experencing yet this problem? > > Cheers > Eneko > > El 2/10/19 a las 18:09, Herv? Ballans escribi?: >> Hi Alexandre, >> >> We encouter exactly the same problem as Laurent Caron (after upgrade >> from 5 to 6). >> >> So I tried your patch 3 days ago, but unfortunately, the problem >> still occurs... >> >> This is a really annoying problem, since sometimes, all the PVE nodes >> of our cluster reboot quasi-simultaneously ! >> And in the same time, we don't encounter this problem with our other >> PVE cluster in version 5. >> (And obviously we are waiting for a solution and a stable situation >> before upgrade it !) >> >> It seems to be a unicast or corosync3 problem, but logs are not >> really verbose at the time of reboot... >> >> Is there anything else to test ? >> >> Regards, >> Herv? >> >> Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit?: >>> Hi, >>> >>> a patch is available in pvetest >>> >>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >>> >>> >>> can you test it ? >>> >>> (you need to restart corosync after install of the deb) >>> >>> >>> ----- Mail original ----- >>> De: "Laurent CARON" >>> ?: "proxmoxve" >>> Envoy?: Lundi 16 Septembre 2019 09:55:34 >>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >>> >>> Hi, >>> >>> >>> After upgrading our 4 node cluster from PVE 5 to 6, we experience >>> constant crashed (once every 2 days). >>> >>> Those crashes seem related to corosync. >>> >>> Since numerous users are reporting sych issues (broken cluster after >>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >>> corosync to version 2.4.4 without impacting functionnality ? >>> >>> Basic steps would be: >>> >>> On all nodes >>> >>> # systemctl stop pve-ha-lrm >>> >>> Once done, on all nodes: >>> >>> # systemctl stop pve-ha-crm >>> >>> Once done, on all nodes: >>> >>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >>> >>> Then, once corosync has been downgraded, on all nodes >>> >>> # systemctl start pve-ha-lrm >>> # systemctl start pve-ha-crm >>> >>> Would that work ? 
>>> >>> Thanks >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From herve.ballans at ias.u-psud.fr Wed Nov 20 16:12:53 2019 From: herve.ballans at ias.u-psud.fr (=?UTF-8?Q?Herv=c3=a9_Ballans?=) Date: Wed, 20 Nov 2019 16:12:53 +0100 Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> Message-ID: <76b79e7e-4141-329b-8330-c82fe8ac8335@ias.u-psud.fr> Dear all, Since we upgraded to these versions (15 days ago), we don't encounter anymore the problem :) We are still waiting a few more days to be sure of stability but looks good ! Anyway, just for my education, is there someone here who can explain shortly what was the problem (unicast management ?) or who have a good link regarding this "behavior" ? Thanks ! Cheers, rv Le 08/11/2019 ? 11:18, Alexandre DERUMIER a ?crit?: > Hi, > > do you have upgrade all your nodes to > > corosync 3.0.2-pve4 > libknet1:amd64 1.13-pve1 > > > ? > > (available in pve-no-subscription et pve-enteprise repos) > > ----- Mail original ----- > De: "Eneko Lacunza" > ?: "proxmoxve" > Envoy?: Jeudi 7 Novembre 2019 15:35:38 > Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 > > Hi all, > > We updated our office cluster to get the patch, but got a node reboot on > 31th october. Node was fenced and rebooted, everything continued working OK. > > Is anyone experencing yet this problem? > > Cheers > Eneko > > El 2/10/19 a las 18:09, Herv? Ballans escribi?: >> Hi Alexandre, >> >> We encouter exactly the same problem as Laurent Caron (after upgrade >> from 5 to 6). >> >> So I tried your patch 3 days ago, but unfortunately, the problem still >> occurs... >> >> This is a really annoying problem, since sometimes, all the PVE nodes >> of our cluster reboot quasi-simultaneously ! >> And in the same time, we don't encounter this problem with our other >> PVE cluster in version 5. >> (And obviously we are waiting for a solution and a stable situation >> before upgrade it !) >> >> It seems to be a unicast or corosync3 problem, but logs are not really >> verbose at the time of reboot... >> >> Is there anything else to test ? >> >> Regards, >> Herv? >> >> Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit : >>> Hi, >>> >>> a patch is available in pvetest >>> >>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >>> >>> >>> can you test it ? 
>>> >>> (you need to restart corosync after install of the deb) >>> >>> >>> ----- Mail original ----- >>> De: "Laurent CARON" >>> ?: "proxmoxve" >>> Envoy?: Lundi 16 Septembre 2019 09:55:34 >>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >>> >>> Hi, >>> >>> >>> After upgrading our 4 node cluster from PVE 5 to 6, we experience >>> constant crashed (once every 2 days). >>> >>> Those crashes seem related to corosync. >>> >>> Since numerous users are reporting sych issues (broken cluster after >>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >>> corosync to version 2.4.4 without impacting functionnality ? >>> >>> Basic steps would be: >>> >>> On all nodes >>> >>> # systemctl stop pve-ha-lrm >>> >>> Once done, on all nodes: >>> >>> # systemctl stop pve-ha-crm >>> >>> Once done, on all nodes: >>> >>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >>> >>> Then, once corosync has been downgraded, on all nodes >>> >>> # systemctl start pve-ha-lrm >>> # systemctl start pve-ha-crm >>> >>> Would that work ? >>> >>> Thanks >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Fri Nov 22 07:35:31 2019 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Fri, 22 Nov 2019 07:35:31 +0100 (CET) Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: <76b79e7e-4141-329b-8330-c82fe8ac8335@ias.u-psud.fr> References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> <76b79e7e-4141-329b-8330-c82fe8ac8335@ias.u-psud.fr> Message-ID: <1295115918.1855441.1574404531828.JavaMail.zimbra@odiso.com> >>Anyway, just for my education, is there someone here who can explain >>shortly what was the problem (unicast management ?) or who have a good >>link regarding this "behavior" ? Thanks ! It was multiple bugs in corosync3 (difficult to explain, it was very difficult to debug) here more details: https://github.com/kronosnet/kronosnet/commit/f1a5de2141a73716c09566f294e3873add5c3ff3 https://github.com/kronosnet/kronosnet/commit/1338058fa634b08eee7099c0614e8076267501ff ----- Mail original ----- De: "Herv? Ballans" ?: "proxmoxve" Envoy?: Mercredi 20 Novembre 2019 16:12:53 Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 Dear all, Since we upgraded to these versions (15 days ago), we don't encounter anymore the problem :) We are still waiting a few more days to be sure of stability but looks good ! Anyway, just for my education, is there someone here who can explain shortly what was the problem (unicast management ?) or who have a good link regarding this "behavior" ? Thanks ! Cheers, rv Le 08/11/2019 ? 11:18, Alexandre DERUMIER a ?crit : > Hi, > > do you have upgrade all your nodes to > > corosync 3.0.2-pve4 > libknet1:amd64 1.13-pve1 > > > ? 
> > (available in pve-no-subscription et pve-enteprise repos) > > ----- Mail original ----- > De: "Eneko Lacunza" > ?: "proxmoxve" > Envoy?: Jeudi 7 Novembre 2019 15:35:38 > Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 > > Hi all, > > We updated our office cluster to get the patch, but got a node reboot on > 31th october. Node was fenced and rebooted, everything continued working OK. > > Is anyone experencing yet this problem? > > Cheers > Eneko > > El 2/10/19 a las 18:09, Herv? Ballans escribi?: >> Hi Alexandre, >> >> We encouter exactly the same problem as Laurent Caron (after upgrade >> from 5 to 6). >> >> So I tried your patch 3 days ago, but unfortunately, the problem still >> occurs... >> >> This is a really annoying problem, since sometimes, all the PVE nodes >> of our cluster reboot quasi-simultaneously ! >> And in the same time, we don't encounter this problem with our other >> PVE cluster in version 5. >> (And obviously we are waiting for a solution and a stable situation >> before upgrade it !) >> >> It seems to be a unicast or corosync3 problem, but logs are not really >> verbose at the time of reboot... >> >> Is there anything else to test ? >> >> Regards, >> Herv? >> >> Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit : >>> Hi, >>> >>> a patch is available in pvetest >>> >>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >>> >>> >>> can you test it ? >>> >>> (you need to restart corosync after install of the deb) >>> >>> >>> ----- Mail original ----- >>> De: "Laurent CARON" >>> ?: "proxmoxve" >>> Envoy?: Lundi 16 Septembre 2019 09:55:34 >>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >>> >>> Hi, >>> >>> >>> After upgrading our 4 node cluster from PVE 5 to 6, we experience >>> constant crashed (once every 2 days). >>> >>> Those crashes seem related to corosync. >>> >>> Since numerous users are reporting sych issues (broken cluster after >>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >>> corosync to version 2.4.4 without impacting functionnality ? >>> >>> Basic steps would be: >>> >>> On all nodes >>> >>> # systemctl stop pve-ha-lrm >>> >>> Once done, on all nodes: >>> >>> # systemctl stop pve-ha-crm >>> >>> Once done, on all nodes: >>> >>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >>> >>> Then, once corosync has been downgraded, on all nodes >>> >>> # systemctl start pve-ha-lrm >>> # systemctl start pve-ha-crm >>> >>> Would that work ? 
>>> >>> Thanks >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Fri Nov 22 09:45:55 2019 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 22 Nov 2019 09:45:55 +0100 Subject: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 In-Reply-To: <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> References: <1121750443.5445084.1568991622944.JavaMail.zimbra@odiso.com> <2d982b48-8939-4a0c-1554-78dfa9d07749@ias.u-psud.fr> <434993198.1384535.1573208338428.JavaMail.zimbra@odiso.com> Message-ID: <357de794-4039-6a48-07c8-533a139fab91@binovo.es> Hi Alexandre, Sorry for the delay getting back, I missed your reply. I see the following versions: corosync: 3.0.2-pve2 libknet1: 1.12-pve1 So no :( Was this (important?) fix announced somehow? Anyway, will upgrade the cluster ASAP, thanks a lot for the hint!! Regards Eneko El 8/11/19 a las 11:18, Alexandre DERUMIER escribi?: > Hi, > > do you have upgrade all your nodes to > > corosync 3.0.2-pve4 > libknet1:amd64 1.13-pve1 > > > ? > > (available in pve-no-subscription et pve-enteprise repos) > > ----- Mail original ----- > De: "Eneko Lacunza" > ?: "proxmoxve" > Envoy?: Jeudi 7 Novembre 2019 15:35:38 > Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 > > Hi all, > > We updated our office cluster to get the patch, but got a node reboot on > 31th october. Node was fenced and rebooted, everything continued working OK. > > Is anyone experencing yet this problem? > > Cheers > Eneko > > El 2/10/19 a las 18:09, Herv? Ballans escribi?: >> Hi Alexandre, >> >> We encouter exactly the same problem as Laurent Caron (after upgrade >> from 5 to 6). >> >> So I tried your patch 3 days ago, but unfortunately, the problem still >> occurs... >> >> This is a really annoying problem, since sometimes, all the PVE nodes >> of our cluster reboot quasi-simultaneously ! >> And in the same time, we don't encounter this problem with our other >> PVE cluster in version 5. >> (And obviously we are waiting for a solution and a stable situation >> before upgrade it !) >> >> It seems to be a unicast or corosync3 problem, but logs are not really >> verbose at the time of reboot... >> >> Is there anything else to test ? >> >> Regards, >> Herv? >> >> Le 20/09/2019 ? 17:00, Alexandre DERUMIER a ?crit : >>> Hi, >>> >>> a patch is available in pvetest >>> >>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb >>> >>> >>> can you test it ? >>> >>> (you need to restart corosync after install of the deb) >>> >>> >>> ----- Mail original ----- >>> De: "Laurent CARON" >>> ?: "proxmoxve" >>> Envoy?: Lundi 16 Septembre 2019 09:55:34 >>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 >>> >>> Hi, >>> >>> >>> After upgrading our 4 node cluster from PVE 5 to 6, we experience >>> constant crashed (once every 2 days). 
>>> >>> Those crashes seem related to corosync. >>> >>> Since numerous users are reporting sych issues (broken cluster after >>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade >>> corosync to version 2.4.4 without impacting functionnality ? >>> >>> Basic steps would be: >>> >>> On all nodes >>> >>> # systemctl stop pve-ha-lrm >>> >>> Once done, on all nodes: >>> >>> # systemctl stop pve-ha-crm >>> >>> Once done, on all nodes: >>> >>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 >>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 >>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 >>> >>> Then, once corosync has been downgraded, on all nodes >>> >>> # systemctl start pve-ha-lrm >>> # systemctl start pve-ha-crm >>> >>> Would that work ? >>> >>> Thanks >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From arjenvanweelden at gmail.com Sat Nov 23 09:29:54 2019 From: arjenvanweelden at gmail.com (arjenvanweelden at gmail.com) Date: Sat, 23 Nov 2019 09:29:54 +0100 Subject: [PVE-User] multi-function devices, webGUI and fix #2436 Message-ID: Hi, Yesterday evening, I was surprised by the same PCI passthrough issue as described in https://forum.proxmox.com/threads/pci-passthrough-not-working-after-update.60580/ . A VM failed to start with the error "no pci device info for device '00:xy.0'", while working fine before the apt-get dist-upgrade and reboot. Once it was clear what the issue was, it was easily resolved by adding 0000: to the hostpci entries. Unfortunately, the Help button/documentation does not mention this. This issue did not occur for multi-function devices. Also, when using the webUI and enabling "All Functions" for the device, the setting is changed from "hostpci0: 0000:00:xy.0" to "hostpci0: 00.xy". Changing it the other way around (disable multi-function in webUI) does not add the required "0000:", which will fail at the next VM start. Is this an intended difference or is it an oversight that will change (unexpectedly) in the future? As always, thank you for all the hard work put into Proxmox VE! kind regards, Arjen From t.lamprecht at proxmox.com Sat Nov 23 10:19:35 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 23 Nov 2019 10:19:35 +0100 Subject: [PVE-User] multi-function devices, webGUI and fix #2436 In-Reply-To: References: Message-ID: <2c610027-ce35-a9cf-8bd7-522c21a911b5@proxmox.com> Hi, On 11/23/19 9:29 AM, arjenvanweelden at gmail.com wrote: > Hi, > > Yesterday evening, I was surprised by the same PCI passthrough issue as > described in > https://forum.proxmox.com/threads/pci-passthrough-not-working-after-update.60580/ > . A VM failed to start with the error "no pci device info for > device '00:xy.0'", while working fine before the apt-get dist-upgrade > and reboot. 
> Once it was clear what the issue was, it was easily resolved by adding > 0000: to the hostpci entries. > Unfortunately, the Help button/documentation does not mention this. > > This issue did not occur for multi-function devices. Also, when using > the webUI and enabling "All Functions" for the device, the setting is > changed from "hostpci0: 0000:00:xy.0" to "hostpci0: 00.xy". > Changing it the other way around (disable multi-function in webUI) does > not add the required "0000:", which will fail at the next VM start. > Is this an intended difference or is it an oversight that will change > (unexpectedly) in the future? > We had a fix which allowed to use PCI domains other than "0000", while not common we had people report the need. This fix seemed to made some unintended fallout.. I just uploaded qemu-server in version 6.0-17 with a small fix for it, I'll check around a bit and if it seems got it may get to no-subscription soon. Thanks for reporting! cheers, Thomas From kai at zimmer.net Mon Nov 25 06:14:15 2019 From: kai at zimmer.net (kai at zimmer.net) Date: Mon, 25 Nov 2019 06:14:15 +0100 Subject: [PVE-User] proxmox-ve: 5.4-2 can't access webinterface after update Message-ID: <851906c6c6e5abf82add927f7e2582e4@zimmer.net> Hi, i updated my Proxmox VE Host (via 'apt-get update; apt-get upgrade'). Now the web interface is not accessible anymore. Additionally i have disconnected network devices in the webinterface before the update. I can start the VMs via 'qm start 100', but they cannot get network (and i don't know how to fix this on the command line). To fix the web interface i tried 'service pveproxy restart' and 'service pvedaemon restart' - with no success. To fix the network in the VMs i tried 'qm set 100 --net0 model=virtio,link_down=1' and 'qm set 100 --net0 model=virtio,link_down=0' - also without success. proxmox-ve: 5.4-2 (running kernel: 4.15.18-7-pve) pve-manager: 5.2-10 (running version: 5.2-10/6f892b40) pve-kernel-4.15: 5.2-10 pve-kernel-4.13: 5.2-2 pve-kernel-4.15.18-7-pve: 4.15.18-27 pve-kernel-4.15.18-4-pve: 4.15.18-23 pve-kernel-4.13.16-4-pve: 4.13.16-51 pve-kernel-4.13.16-1-pve: 4.13.16-46 pve-kernel-4.13.13-4-pve: 4.13.13-35 pve-kernel-4.4.98-3-pve: 4.4.98-103 corosync: 2.4.4-pve1 criu: 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2 libjs-extjs: 6.0.1-2 libpve-access-control: 5.0-8 libpve-apiclient-perl: 2.0-5 libpve-common-perl: 5.0-41 libpve-guest-common-perl: 2.0-18 libpve-http-server-perl: 2.0-14 libpve-storage-perl: 5.0-30 libqb0: 1.0.3-1~bpo9 lvm2: 2.02.168-pve6 lxc-pve: 3.1.0-7 lxcfs: 3.0.3-pve1 novnc-pve: 1.0.0-3 proxmox-widget-toolkit: 1.0-28 pve-cluster: 5.0-38 pve-container: 2.0-29 pve-docs: 5.4-2 pve-firewall: 3.0-22 pve-firmware: 2.0-7 pve-ha-manager: 2.0-9 pve-i18n: 1.1-4 pve-libspice-server1: 0.14.1-2 pve-qemu-kvm: 3.0.1-4 pve-xtermjs: 3.12.0-1 qemu-server: 5.0-38 smartmontools: 6.5+svn4324-1 spiceterm: 3.0-5 vncterm: 1.5-3 # service pveproxy status ? 
pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2019-11-25 05:10:18 CET; 24min ago Process: 7797 ExecStop=/usr/bin/pveproxy stop (code=exited, status=0/SUCCESS) Process: 7803 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS) Main PID: 7830 (pveproxy) Tasks: 4 (limit: 19660) Memory: 123.4M CPU: 2.107s CGroup: /system.slice/pveproxy.service ??7830 pveproxy ??7833 pveproxy worker ??7834 pveproxy worker ??7835 pveproxy worker Nov 25 05:31:36 holodoc pveproxy[7835]: Can't locate object method "set_request_host" via package "PVE::RPCEnvironment" at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1206. Nov 25 05:31:41 holodoc pveproxy[7835]: problem with client 192.168.1.86; Connection timed out # service pvedaemon status ? pvedaemon.service - PVE API Daemon Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2019-11-25 05:11:16 CET; 25min ago Process: 7991 ExecStop=/usr/bin/pvedaemon stop (code=exited, status=0/SUCCESS) Process: 8001 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS) Main PID: 8021 (pvedaemon) Tasks: 4 (limit: 19660) Memory: 114.8M CPU: 1.791s CGroup: /system.slice/pvedaemon.service ??8021 pvedaemon ??8024 pvedaemon worker ??8025 pvedaemon worker ??8026 pvedaemon worker Nov 25 05:11:15 holodoc systemd[1]: Starting PVE API Daemon... Nov 25 05:11:16 holodoc pvedaemon[8021]: starting server Nov 25 05:11:16 holodoc pvedaemon[8021]: starting 3 worker(s) Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8024 started Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8025 started Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8026 started Nov 25 05:11:16 holodoc systemd[1]: Started PVE API Daemon. Any ideas how to fix this? Best regards, Kai From t.lamprecht at proxmox.com Mon Nov 25 09:08:36 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Mon, 25 Nov 2019 09:08:36 +0100 Subject: [PVE-User] proxmox-ve: 5.4-2 can't access webinterface after update In-Reply-To: <851906c6c6e5abf82add927f7e2582e4@zimmer.net> References: <851906c6c6e5abf82add927f7e2582e4@zimmer.net> Message-ID: <599c6bb5-9161-b0ff-6a55-30e6d72683c1@proxmox.com> Hi, On 11/25/19 6:14 AM, kai at zimmer.net wrote: > Hi, > i updated my Proxmox VE Host (via 'apt-get update; apt-get upgrade'). That's the wrong way to upgrade a Proxmox VE host[0] and is probably the cause for your problems. Use apt-get update apt-get dist-upgrade or the more modern interface: apt update apt full-upgrade [0]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_system_software_updates > Now the web interface is not accessible anymore. Additionally i have > disconnected network devices in the webinterface before the update. I > can start the VMs via 'qm start 100', but they cannot get network (and i > don't know how to fix this on the command line). > To fix the web interface i tried 'service pveproxy restart' and 'service > pvedaemon restart' - with no success. > To fix the network in the VMs i tried 'qm set 100 --net0 > model=virtio,link_down=1' and 'qm set 100 --net0 > model=virtio,link_down=0' - also without success. > proxmox-ve: 5.4-2 (running kernel: 4.15.18-7-pve) Proxmox VE 5.4-2 is latest from this year, ok, but > pve-manager: 5.2-10 (running version: 5.2-10/6f892b40) this manager version is from fall 2018, and highly probable incompatible with the rest of the stack - other packages may have this issue too.. 
> pve-kernel-4.15: 5.2-10 > pve-kernel-4.13: 5.2-2 > pve-kernel-4.15.18-7-pve: 4.15.18-27 > pve-kernel-4.15.18-4-pve: 4.15.18-23 > pve-kernel-4.13.16-4-pve: 4.13.16-51 > pve-kernel-4.13.16-1-pve: 4.13.16-46 > pve-kernel-4.13.13-4-pve: 4.13.13-35 > pve-kernel-4.4.98-3-pve: 4.4.98-103 > corosync: 2.4.4-pve1 > criu: 2.11.1-1~bpo90 > glusterfs-client: 3.8.8-1 > ksm-control-daemon: 1.2-2 > libjs-extjs: 6.0.1-2 > libpve-access-control: 5.0-8 > libpve-apiclient-perl: 2.0-5 > libpve-common-perl: 5.0-41 > libpve-guest-common-perl: 2.0-18 > libpve-http-server-perl: 2.0-14 > libpve-storage-perl: 5.0-30 > libqb0: 1.0.3-1~bpo9 > lvm2: 2.02.168-pve6 > lxc-pve: 3.1.0-7 > lxcfs: 3.0.3-pve1 > novnc-pve: 1.0.0-3 > proxmox-widget-toolkit: 1.0-28 > pve-cluster: 5.0-38 > pve-container: 2.0-29 > pve-docs: 5.4-2 > pve-firewall: 3.0-22 > pve-firmware: 2.0-7 > pve-ha-manager: 2.0-9 > pve-i18n: 1.1-4 > pve-libspice-server1: 0.14.1-2 > pve-qemu-kvm: 3.0.1-4 > pve-xtermjs: 3.12.0-1 > qemu-server: 5.0-38 > smartmontools: 6.5+svn4324-1 > spiceterm: 3.0-5 > vncterm: 1.5-3 > # service pveproxy status > ? pveproxy.service - PVE API Proxy Server > Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor > preset: enabled) > Active: active (running) since Mon 2019-11-25 05:10:18 CET; 24min ago > Process: 7797 ExecStop=/usr/bin/pveproxy stop (code=exited, > status=0/SUCCESS) > Process: 7803 ExecStart=/usr/bin/pveproxy start (code=exited, > status=0/SUCCESS) > Main PID: 7830 (pveproxy) > Tasks: 4 (limit: 19660) > Memory: 123.4M > CPU: 2.107s > CGroup: /system.slice/pveproxy.service > ??7830 pveproxy > ??7833 pveproxy worker > ??7834 pveproxy worker > ??7835 pveproxy worker > Nov 25 05:31:36 holodoc pveproxy[7835]: Can't locate object method > "set_request_host" via package "PVE::RPCEnvironment" at > /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1206. > Nov 25 05:31:41 holodoc pveproxy[7835]: problem with client > 192.168.1.86; Connection timed out > # service pvedaemon status > ? pvedaemon.service - PVE API Daemon > Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor > preset: enabled) > Active: active (running) since Mon 2019-11-25 05:11:16 CET; 25min ago > Process: 7991 ExecStop=/usr/bin/pvedaemon stop (code=exited, > status=0/SUCCESS) > Process: 8001 ExecStart=/usr/bin/pvedaemon start (code=exited, > status=0/SUCCESS) > Main PID: 8021 (pvedaemon) > Tasks: 4 (limit: 19660) > Memory: 114.8M > CPU: 1.791s > CGroup: /system.slice/pvedaemon.service > ??8021 pvedaemon > ??8024 pvedaemon worker > ??8025 pvedaemon worker > ??8026 pvedaemon worker > Nov 25 05:11:15 holodoc systemd[1]: Starting PVE API Daemon... > Nov 25 05:11:16 holodoc pvedaemon[8021]: starting server > Nov 25 05:11:16 holodoc pvedaemon[8021]: starting 3 worker(s) > Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8024 started > Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8025 started > Nov 25 05:11:16 holodoc pvedaemon[8021]: worker 8026 started > Nov 25 05:11:16 holodoc systemd[1]: Started PVE API Daemon. > Any ideas how to fix this? > Best regards, > Kai > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Wed Nov 27 14:29:27 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 27 Nov 2019 10:29:27 -0300 Subject: [PVE-User] ZFS rpool grub rescue boot... 
Message-ID: 

Hi there

I just installed Proxmox 6 on an HPE server, which has an HP Smart Array P420i and, unfortunately, this not-so-Smart Array doesn't give me the option to make it non-RAID or HBA/IT mode...
The Proxmox installer ran smoothly, but when it tries to boot, I get this error:

 unknown device

Can somebody help with this issue?? Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36

From gaio at sv.lnf.it Wed Nov 27 14:47:05 2019
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Wed, 27 Nov 2019 14:47:05 +0100
Subject: [PVE-User] ZFS rpool grub rescue boot...
In-Reply-To: 
References: 
Message-ID: <20191127134705.GH3123@sv.lnf.it>

Mandi! Gilberto Nunes
In chel di` si favelave...

> I just installed Proxmox 6 in an HPE server, which has HP Smart Array P420i
> and, unfortunately, this not so Smart Array doesn't give me the options to
> make non-raid or HBA/IT Mode...
> The Proxmox installer ran smoothly, but when try to boot, get this error

I use that controller, but not with ZFS.

Anyway, it seems that it IS possible to put the controller in HBA mode, see:

 https://www.youtube.com/watch?v=JuaezJd4C3I

Probably you can get the same effect by:

a) upgrading to the latest BIOS/firmware

b) using hpssacli from a temp-installed Linux distro, e.g. a USB key or a USB disk.
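Something along these lines, if I remember the tool correctly (check the 'show' output first, and note that not every P420i firmware exposes the HBA mode switch):

 # list controllers and current configuration
 hpssacli ctrl all show config detail

 # enable HBA/passthrough mode on the controller in slot 0 (reboot afterwards)
 hpssacli ctrl slot=0 modify hbamode=on

But as said, I've not done this with ZFS myself, so treat it only as a starting point.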
Em qua., 27 de nov. de 2019 às 10:47, Marco Gaiarin escreveu:

> Mandi! Gilberto Nunes
>   In chel di` si favelave...
>
> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array P420i
> > and, unfortunately, this not so Smart Array doesn't give me the options to make non-raid or HBA/IT Mode...
> > The Proxmox installer ran smoothly, but when I try to boot, I get this error
>
> I use that controller, but not with ZFS.
>
> Anyway it seems that it IS possible to put the controller in HBA mode, see:
>
>         https://www.youtube.com/watch?v=JuaezJd4C3I
>
> Probably you can get the same effect by:
>
> a) upgrading to the latest bios
>
> b) using hpssacli from a temp-installed linux distro, e.g. a USB key or a USB disk.
>
> --
> dott. Marco Gaiarin        GNUPG Key ID: 240A3D66
> Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
> Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
> marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
> http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From news at fladi.de  Wed Nov 27 15:20:02 2019
From: news at fladi.de (Tim Duelken)
Date: Wed, 27 Nov 2019 15:20:02 +0100
Subject: [PVE-User] ZFS rpool grub rescue boot...
In-Reply-To:
References:
Message-ID:

Hi Gilberto,

> I just installed Proxmox 6 in an HPE server, which has HP Smart Array P420i
> and, unfortunately, this not so Smart Array doesn't give me the options to make non-raid or HBA/IT Mode...
> The Proxmox installer ran smoothly, but when I try to boot, I get this error
>
> unknow device
>
> Somebody can help with this issue??

We use this controller with ZFS. You can put it into HBA-Mode. See here:
https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/

BUT - you can't boot from any disk this controller is attached to.

br
Tim
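
Since hpssacli keeps coming up: a rough sketch of what the switch looks like from a live or temporarily installed Linux environment, assuming the controller sits in slot 0 and that its firmware offers HBA mode at all (on a P420i that may require a firmware update first, and as noted above you then cannot boot from disks behind the controller):

  # list controllers, their slots and the current configuration
  hpssacli ctrl all show config

  # show controller details; firmware that supports it reports the HBA mode state here
  hpssacli ctrl slot=0 show detail

  # switch the controller to HBA mode (existing logical drives typically have to be removed first)
  hpssacli ctrl slot=0 modify hbamode=on

On newer HPE tooling the binary is called ssacli instead of hpssacli, with the same command grammar.
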
From gilberto.nunes32 at gmail.com  Wed Nov 27 15:43:27 2019
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Wed, 27 Nov 2019 11:43:27 -0300
Subject: [PVE-User] ZFS rpool grub rescue boot...
In-Reply-To:
References:
Message-ID:

Hi... yes! That's great, but unfortunately I do not have access to update the firmware... It's a cloud dedicated server!
But I am glad to know that it is possible to change the raid mode...

Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36

Em qua., 27 de nov. de 2019 às 11:20, Tim Duelken escreveu:

> Hi Gilberto,
>
> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array P420i
> > and, unfortunately, this not so Smart Array doesn't give me the options to make non-raid or HBA/IT Mode...
> > The Proxmox installer ran smoothly, but when I try to boot, I get this error
> >
> > unknow device
> >
> > Somebody can help with this issue??
>
> We use this controller with ZFS. You can put it into HBA-Mode. See here:
> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
> <https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/>
>
> BUT - you can't boot from any disk this controller is attached to.
>
> br
> Tim
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From gilberto.nunes32 at gmail.com  Wed Nov 27 17:30:42 2019
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Wed, 27 Nov 2019 13:30:42 -0300
Subject: [PVE-User] ZFS rpool grub rescue boot...
In-Reply-To:
References:
Message-ID:

Question: I wonder if I could use hpssacli to do the firmware update. Perhaps this could work, don't you agree?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36

Em qua., 27 de nov. de 2019 às 11:43, Gilberto Nunes <gilberto.nunes32 at gmail.com> escreveu:

> Hi... yes! That's great, but unfortunately I do not have access to update the firmware... It's a cloud dedicated server!
> But I am glad to know that it is possible to change the raid mode...
>
> Thanks a lot
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
> Em qua., 27 de nov. de 2019 às 11:20, Tim Duelken escreveu:
>
>> Hi Gilberto,
>>
>> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array P420i
>> > and, unfortunately, this not so Smart Array doesn't give me the options to make non-raid or HBA/IT Mode...
>> > The Proxmox installer ran smoothly, but when I try to boot, I get this error
>> >
>> > unknow device
>> >
>> > Somebody can help with this issue??
>>
>> We use this controller with ZFS. You can put it into HBA-Mode. See here:
>> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
>> <https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/>
>>
>> BUT - you can't boot from any disk this controller is attached to.
>>
>> br
>> Tim
>>
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>

From d.csapak at proxmox.com  Thu Nov 28 09:49:41 2019
From: d.csapak at proxmox.com (Dominik Csapak)
Date: Thu, 28 Nov 2019 09:49:41 +0100
Subject: [PVE-User] question regarding column in disks view / passthrough disk
In-Reply-To: <720589e1-65e6-fc73-0ec7-aeec5bc2c724@web.de>
References: <720589e1-65e6-fc73-0ec7-aeec5bc2c724@web.de>
Message-ID: <07e59372-4b02-2172-f856-238b6f4384d0@proxmox.com>

On 11/27/19 3:00 PM, Roland @web.de wrote:
> Hello,

Hi,

>
> in Datacenter->Nodename->Disks there is a column "Usage" which shows what filesystem is being used for the disk.
>
> I have 2 System disks (ssd) which contain the proxmox system and they are being used as a ZFS mirror, i.e. I can put virtual machines on rpool/data
>
> The other harddisks (ordinary large sata disks) are being used as passthrough devices, i.e. I have added them with a command like
>
> qm set 100 -scsi5 /dev/disk/by-id/ata-HGST_HUH721212ALE600_AAGVE62H
>
> as raw disks to virtual machines, i.e. they are used from a single virtual machine.
>
> From the Disk View in Webgui, you cannot differentiate between those, i.e. they simply look "the same" from a host's management perspective.
>
> Wouldn't it make sense to make a difference in the Webgui when a disk contains filesystem/data which is not (and should not be) accessed on the host/hypervisor level?

In general I agree with you that this would be nice.
The problem here is that during the disk enumeration, we do not touch vm configs (also I am not even sure if we could do it that easily because of package dependency chains) and thus have no information which disk is used by vms

>
> I would feel much better if proxmox knew some "the host OS should not touch this disk at all" flag and if it would have an understanding of "this is a disk I (can) use" and "this is a disk I can't/should not use"

if a disk is not used by a mountpoint/zfs/lvm/storage definition/etc. it will not be touched by pve (normally) so this is only a 'cosmetic' issue

passing through a disk to a vm is always a very advanced feature that can be very dangerous (thus it is not exposed in the web interface) so the admin should already know what he's doing...

>
> regards
> Roland
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
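
Until something like that exists in the GUI, the mapping can at least be reconstructed by hand, since passed-through disks are referenced by their stable /dev/disk/by-id path in the guest configuration files. A small sketch (the disk ID below is just the example from Roland's mail):

  # which VM config on this node references a given physical disk?
  grep -l "ata-HGST_HUH721212ALE600_AAGVE62H" /etc/pve/qemu-server/*.conf

  # list every by-id passthrough reference across all local VM configs
  grep -H "/dev/disk/by-id/" /etc/pve/qemu-server/*.conf

That at least answers "is anything using this disk?" before touching it on the host, even if the web interface does not show it.
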
From mark at tuxis.nl  Thu Nov 28 14:55:59 2019
From: mark at tuxis.nl (Mark Schouten)
Date: Thu, 28 Nov 2019 13:55:59 +0000
Subject: [PVE-User] Images on CephFS?
In-Reply-To:
References:
Message-ID:

Yes, this works. I've created bug 2490 for this.

https://bugzilla.proxmox.com/show_bug.cgi?id=2490

-- 
Mark Schouten
Tuxis B.V.
https://www.tuxis.nl/ | +31 318 200208

------ Original Message ------
From: "Marco M. Gabriel"
To: "Mark Schouten"
Cc: "PVE User List"
Sent: 9/25/2019 4:49:51 PM
Subject: Re: [PVE-User] Images on CephFS?

>Hi Mark,
>
>as a temporary fix, you could just add a "directory" based storage
>that points to the CephFS mount point.
>
>Marco
>
>Am Mi., 25. Sept. 2019 um 15:49 Uhr schrieb Mark Schouten:
>>
>>Hi,
>>
>>Just noticed that this is not a PVE 6-change. It's also changed in
>>5.4-3. We're using this actively, which makes me wonder what will happen
>>if we stop/start a VM using disks on CephFS...
>>
>>Any way we can enable it again?
>>
>>--
>>Mark Schouten
>>Tuxis B.V.
>>https://www.tuxis.nl/ | +31 318 200208
>>
>>------ Original Message ------
>>From: "Mark Schouten"
>>To: "PVE User List"
>>Sent: 9/19/2019 9:15:17 AM
>>Subject: [PVE-User] Images on CephFS?
>>
>> >
>> >Hi,
>> >
>> >We just built our latest cluster with PVE 6.0. We also offer CephFS
>> >'slow but large' storage with our clusters, on which people can create
>> >images for backupservers. However, it seems that in PVE 6.0, we can no
>> >longer use CephFS for images?
>> >
>> >
>> >Can anybody confirm (and explain?) or am I looking in the wrong direction?
>> >
>> >--
>> >Mark Schouten
>> >
>> >Tuxis, Ede, https://www.tuxis.nl
>> >
>> >T: +31 318 200208
>> >
>> >
>> >_______________________________________________
>> >pve-user mailing list
>> >pve-user at pve.proxmox.com
>> >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>_______________________________________________
>>pve-user mailing list
>>pve-user at pve.proxmox.com
>>https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
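
For readers looking for the workaround itself: the directory-storage trick quoted above boils down to one command, assuming the CephFS storage is already mounted by PVE under /mnt/pve/cephfs (the storage name cephfs-images is just an example):

  # expose the mounted CephFS path as a directory storage that is allowed to hold disk images
  pvesm add dir cephfs-images --path /mnt/pve/cephfs --content images --shared 1

The --shared flag only marks the storage as available on every node; it does not do any mounting itself, so the CephFS mount has to exist on each node that should use the storage.
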
From devzero at web.de  Thu Nov 28 18:47:10 2019
From: devzero at web.de (Roland @web.de)
Date: Thu, 28 Nov 2019 18:47:10 +0100
Subject: [PVE-User] question regarding column in disks view / passthrough disk
In-Reply-To: <07e59372-4b02-2172-f856-238b6f4384d0@proxmox.com>
References: <720589e1-65e6-fc73-0ec7-aeec5bc2c724@web.de> <07e59372-4b02-2172-f856-238b6f4384d0@proxmox.com>
Message-ID:

thanks for commenting,

>The problem here is that during the disk enumeration, we do not touch vm configs (also I am not even sure if we could do it that easily because of package dependency chains) and thus have no information which disk is used by vms

what about some simple blacklisting mechanism, i.e. some config file where I can tell proxmox: "don't touch/honor this special disk(s) and give it some special marker/flag in the gui"?

I think this could be easily implemented and be very effective/valuable

>passing through a disk to a vm is always a very advanced feature that

yes, but while local disks/storage is not a typical/wanted feature with either XenServer or VMware (those vendors don't really "support" local storage in a sane way, especially not with passthrough), I think it's one of the strengths of Proxmox and where it's good and where it shines.

having local ZFS storage as an option and making proper use of local storage via ZFS or LVM is one of the main reasons why I like Proxmox so much.

regards
Roland

Am 28.11.19 um 09:49 schrieb Dominik Csapak:
> On 11/27/19 3:00 PM, Roland @web.de wrote:
>> Hello,
>
> Hi,
>
>>
>> in Datacenter->Nodename->Disks there is a column "Usage" which shows what filesystem is being used for the disk.
>>
>> I have 2 System disks (ssd) which contain the proxmox system and they are being used as a ZFS mirror, i.e. I can put virtual machines on rpool/data
>>
>> The other harddisks (ordinary large sata disks) are being used as passthrough devices, i.e. I have added them with a command like
>>
>> qm set 100 -scsi5 /dev/disk/by-id/ata-HGST_HUH721212ALE600_AAGVE62H
>>
>> as raw disks to virtual machines, i.e. they are used from a single virtual machine.
>>
>> From the Disk View in Webgui, you cannot differentiate between those, i.e. they simply look "the same" from a host's management perspective.
>>
>> Wouldn't it make sense to make a difference in the Webgui when a disk contains filesystem/data which is not (and should not be) accessed on the host/hypervisor level?
>
> In general I agree with you that this would be nice.
> The problem here is that during the disk enumeration, we do not touch vm configs (also I am not even sure if we could do it that easily because of package dependency chains) and thus have no information which disk is used by vms
>
>>
>> I would feel much better if proxmox knew some "the host OS should not touch this disk at all" flag and if it would have an understanding of "this is a disk I (can) use" and "this is a disk I can't/should not use"
>
> if a disk is not used by a mountpoint/zfs/lvm/storage definition/etc. it will not be touched by pve (normally) so this is only a 'cosmetic' issue
>
> passing through a disk to a vm is always a very advanced feature that can be very dangerous (thus it is not exposed in the web interface) so the admin should already know what he's doing...
>
>>
>> regards
>> Roland
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user