From f.ebner at proxmox.com Mon Jan 3 11:23:55 2022
From: f.ebner at proxmox.com (Fabian Ebner)
Date: Mon, 3 Jan 2022 11:23:55 +0100
Subject: [PVE-User] (Bug) pve-zsync tpm device missing
In-Reply-To:
References:
Message-ID:

Hi,

and thanks for the report! A patch that should fix the issue is available on
the pve-devel mailing list:
https://lists.proxmox.com/pipermail/pve-devel/2022-January/051324.html

Am 12/18/21 um 20:34 schrieb Dan via pve-user:
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From iztok.gregori at elettra.eu Wed Jan 5 09:01:30 2022
From: iztok.gregori at elettra.eu (iztok Gregori)
Date: Wed, 5 Jan 2022 09:01:30 +0100
Subject: [PVE-User] qemu-server / qmp issues: One VM cannot complete backup and freezes
Message-ID: <0fa9e157-3d2b-a133-6faa-eb4876e214e3@elettra.eu>

Hi to all!

Starting about a week ago, when we added new nodes to the cluster and upgraded
everything to the latest Proxmox 6.4 (with the ultimate goal of upgrading all
the nodes to 7.1 in the not-so-near future), *one* of the VMs stopped backing
up. The backup job was blocked, and once we terminated it manually the VM
froze; only a hard poweroff/poweron brought the VM back.

In the logs we have a lot of the following:

VM 0000 qmp command failed - VM 0000 qmp command 'query-proxmox-support' failed - unable to connect to VM 0000 qmp socket - timeout after 31 retries

I searched for it and found multiple threads on the forum, so in some form it
is a known issue, but I'm curious what the trigger was and what we could do to
work around the problem (apart from upgrading to PVE 7.1, which we will do,
but not this week). Can you give me some advice?

To summarize the work we did last week (from when the backup stopped working):

- Did a full upgrade on all the cluster nodes and rebooted them.
- Upgraded Ceph from Nautilus to Octopus.
- Installed new Ceph OSDs on the new nodes (8 out of 16).

The problematic VM was running (when it wasn't problematic) on one node which
(at that moment) wasn't part of the Ceph cluster (but the storage was, and
still is, always Ceph). We migrated it to a different node but had the same
issues. The VM has 12 RBD disks (which is a lot more than the cluster average)
and all the disks are backed up to an NFS share. Because the problem is *only*
on that particular VM, I could split it into 2 VMs and rearrange the number of
disks (to be more in line with the cluster average), or I could rush the
upgrade to 7.1 (hoping that the problem is only on PVE 6.4...).

Here is the conf:

> agent: 1
> bootdisk: virtio0
> cores: 4
> ide2: none,media=cdrom
> memory: 4096
> name: problematic-vm
> net0: virtio=A2:69:F4:8C:38:22,bridge=vmbr0,tag=000
> numa: 0
> onboot: 1
> ostype: l26
> scsihw: virtio-scsi-pci
> smbios1: uuid=8bd477be-69ac-4b51-9c5a-a149f96da521
> sockets: 1
> virtio0: rbd_vm:vm-1043-disk-0,size=8G
> virtio1: rbd_vm:vm-1043-disk-1,size=100G
> virtio10: rbd_vm:vm-1043-disk-10,size=30G
> virtio11: rbd_vm:vm-1043-disk-11,size=100G
> virtio12: rbd_vm:vm-1043-disk-12,size=200G
> virtio2: rbd_vm:vm-1043-disk-2,size=100G
> virtio3: rbd_vm:vm-1043-disk-3,size=20G
> virtio4: rbd_vm:vm-1043-disk-4,size=20G
> virtio5: rbd_vm:vm-1043-disk-5,size=30G
> virtio6: rbd_vm:vm-1043-disk-6,size=100G
> virtio7: rbd_vm:vm-1043-disk-7,size=200G
> virtio8: rbd_vm:vm-1043-disk-8,size=20G
> virtio9: rbd_vm:vm-1043-disk-9,size=20G

The VM is a CentOS 7 NFS server.
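For reference, the QMP socket of a running guest can be poked at directly from
the node; this is only a rough sketch, with VMID 1043 taken from the config
above and the socket path being the standard qemu-server location:

  # on the node currently running the guest
  qm status 1043 --verbose            # goes through QMP; hangs/times out if the socket is stuck
  ls -l /var/run/qemu-server/1043.qmp
  qm guest cmd 1043 ping              # only if the QEMU guest agent is installed in the VM

A socket that exists but never answers usually points at the QEMU main loop
being blocked (for example on storage I/O) rather than at the management stack.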
The CEPH cluster health is OK: > cluster: > id: 645e8181-8424-41c4-9bc9-7e37b740e9a9 > health: HEALTH_OK > > services: > mon: 5 daemons, quorum node-01,node-02,node-03,node-05,node-07 (age 8d) > mgr: node-01(active, since 8d), standbys: node-03, node-02, node-07, node-05 > osd: 120 osds: 120 up (since 6d), 120 in (since 6d) > > task status: > > data: > pools: 3 pools, 1057 pgs > objects: 4.65M objects, 17 TiB > usage: 67 TiB used, 139 TiB / 207 TiB avail > pgs: 1056 active+clean > 1 active+clean+scrubbing+deep > All of the nodes have the same PVE version: > proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve) > pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) > pve-kernel-5.4: 6.4-11 > pve-kernel-helper: 6.4-11 > pve-kernel-5.4.157-1-pve: 5.4.157-1 > pve-kernel-5.4.140-1-pve: 5.4.140-1 > pve-kernel-5.4.106-1-pve: 5.4.106-1 > ceph: 15.2.15-pve1~bpo10 > ceph-fuse: 15.2.15-pve1~bpo10 > corosync: 3.1.5-pve2~bpo10+1 > criu: 3.11-3 > glusterfs-client: 5.5-3 > ifupdown: 0.8.35+pve1 > ksm-control-daemon: 1.3-1 > libjs-extjs: 6.0.1-10 > libknet1: 1.22-pve2~bpo10+1 > libproxmox-acme-perl: 1.1.0 > libproxmox-backup-qemu0: 1.1.0-1 > libpve-access-control: 6.4-3 > libpve-apiclient-perl: 3.1-3 > libpve-common-perl: 6.4-4 > libpve-guest-common-perl: 3.1-5 > libpve-http-server-perl: 3.2-3 > libpve-storage-perl: 6.4-1 > libqb0: 1.0.5-1 > libspice-server1: 0.14.2-4~pve6+1 > lvm2: 2.03.02-pve4 > lxc-pve: 4.0.6-2 > lxcfs: 4.0.6-pve1 > novnc-pve: 1.1.0-1 > proxmox-backup-client: 1.1.13-2 > proxmox-mini-journalreader: 1.1-1 > proxmox-widget-toolkit: 2.6-1 > pve-cluster: 6.4-1 > pve-container: 3.3-6 > pve-docs: 6.4-2 > pve-edk2-firmware: 2.20200531-1 > pve-firewall: 4.1-4 > pve-firmware: 3.3-2 > pve-ha-manager: 3.1-1 > pve-i18n: 2.3-1 > pve-qemu-kvm: 5.2.0-6 > pve-xtermjs: 4.7.0-3 > qemu-server: 6.4-2 > smartmontools: 7.2-pve2 > spiceterm: 3.1-1 > vncterm: 1.6-2 > zfsutils-linux: 2.0.6-pve1~bpo10+1 I can provide more informations if it is necessary. Cheers Iztok From nada at verdnatura.es Fri Jan 7 11:37:14 2022 From: nada at verdnatura.es (nada) Date: Fri, 07 Jan 2022 11:37:14 +0100 Subject: [PVE-User] usrmerge Message-ID: <342affb2dccbe14dbe168475927bffdf@verdnatura.es> good day I am doing some full upgrades at containers from buster to bullseye and find out that some scripts related to journalctl cleaning should be corrected old path ... /bin/journalctl new path ... /usr/bin/journalctl reason is explained at https://www.debian.org/releases/bullseye/amd64/release-notes/ch-information.en.html#deprecated-components I've just installed&applied usrmerge at some containers but I see that some old proxmox nodes do NOT have merged /usr * do I have to install usrmerge at all proxmox nodes in cluster ? * our current PVE version is pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve) * is usrmerge applied during full upgrade from 6.4-x to 7.x or should I do it manually after full upgrade ? HAPPY New Year 2022 to everybody !!! Nada From pfrank at gmx.de Fri Jan 7 12:03:15 2022 From: pfrank at gmx.de (Petric Frank) Date: Fri, 07 Jan 2022 12:03:15 +0100 Subject: [PVE-User] dist-upgrade throws error Message-ID: <7332849.EvYhyI6sBW@main> Hello, during the upgrade this morning i get the following error message: --------------------- cut --------------------- root at proxmox:~# apt dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Error! Some packages could not be installed. 
This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libpve-common-perl : Breaks: qemu-server (< 7.0-19) but 7.0-13 is to be installed E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. --------------------- cut --------------------- apt update was executed before without issues. I am using the "no- subscription" repository: deb http://download.proxmox.com/debian bullseye pve-no-subscription Could the repository have a problem ? Any hints ? kind regards From pfrank at gmx.de Fri Jan 7 12:46:12 2022 From: pfrank at gmx.de (Petric Frank) Date: Fri, 07 Jan 2022 12:46:12 +0100 Subject: [PVE-User] dist-upgrade throws error In-Reply-To: <7332849.EvYhyI6sBW@main> References: <7332849.EvYhyI6sBW@main> Message-ID: <8877761.CDJkKcVGEf@main> Hallo, followup - found the pve repository wrongly configured. Now set to deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription according to https://pve.proxmox.com/wiki/Package_Repositories But no change in the error. regards Am Freitag, 7. Januar 2022, 12:03:15 CET schrieb Petric Frank: > Hello, > > during the upgrade this morning i get the following error message: > --------------------- cut --------------------- > root at proxmox:~# apt dist-upgrade > Reading package lists... Done > Building dependency tree... Done > Reading state information... Done > Calculating upgrade... Error! > Some packages could not be installed. This may mean that you have > requested an impossible situation or if you are using the unstable > distribution that some required packages have not yet been created > or been moved out of Incoming. > The following information may help to resolve the situation: > > The following packages have unmet dependencies: > libpve-common-perl : Breaks: qemu-server (< 7.0-19) but 7.0-13 is to be > installed > E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused > by held packages. > --------------------- cut --------------------- > > apt update was executed before without issues. I am using the "no- > subscription" repository: > deb http://download.proxmox.com/debian bullseye pve-no-subscription > > Could the repository have a problem ? > > Any hints ? > > kind regards > > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From pfrank at gmx.de Sat Jan 8 00:37:41 2022 From: pfrank at gmx.de (Petric Frank) Date: Sat, 08 Jan 2022 00:37:41 +0100 Subject: [PVE-User] Solved: Re: dist-upgrade throws error In-Reply-To: <8877761.CDJkKcVGEf@main> References: <7332849.EvYhyI6sBW@main> <8877761.CDJkKcVGEf@main> Message-ID: <5684758.MhkbZ0Pkbq@main> Hello, somehow there was a package on hold which blocked the upgrade. regards Am Freitag, 7. Januar 2022, 12:46:12 CET schrieb Petric Frank: > Hallo, > > followup - found the pve repository wrongly configured. Now set to > deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription > > according to https://pve.proxmox.com/wiki/Package_Repositories > > But no change in the error. > > regards > > Am Freitag, 7. 
Januar 2022, 12:03:15 CET schrieb Petric Frank: > > Hello, > > > > during the upgrade this morning i get the following error message: > > --------------------- cut --------------------- > > root at proxmox:~# apt dist-upgrade > > Reading package lists... Done > > Building dependency tree... Done > > Reading state information... Done > > Calculating upgrade... Error! > > Some packages could not be installed. This may mean that you have > > requested an impossible situation or if you are using the unstable > > distribution that some required packages have not yet been created > > or been moved out of Incoming. > > The following information may help to resolve the situation: > > > > The following packages have unmet dependencies: > > libpve-common-perl : Breaks: qemu-server (< 7.0-19) but 7.0-13 is to be > > > > installed > > E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused > > by held packages. > > --------------------- cut --------------------- > > > > apt update was executed before without issues. I am using the "no- > > > > subscription" repository: > > deb http://download.proxmox.com/debian bullseye pve-no-subscription > > > > Could the repository have a problem ? > > > > Any hints ? > > > > kind regards > > > > > > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at lists.proxmox.com > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From pburyk at gmail.com Sun Jan 9 20:01:06 2022 From: pburyk at gmail.com (Patrick Buryk) Date: Sun, 9 Jan 2022 14:01:06 -0500 Subject: [PVE-User] Subject: Proxmox VE 7.1-2 Installation question Message-ID: Hello, All - I'm a new user of Proxmox, having just installed VE 7.1-2 for about the 5th time trying to resolve the following issue: After a successful install, I login to my host as root. I cannot ping in or out of the host; "ip address" & "ip link" commands show that the eno1 and vmbr0 interfaces are "UP", but their states are "DOWN". Static IP address is shown on vmbr0 interface only - I would have imagined that it would have been set to the eno1 interface instead during the install but, again, this is my first exposure to this product. Connection to my switch show no activity from the host. Ethernet cable has been verified to work on another host. Any ideas?? Thanks, and best wishes to all in 2022! Cheers! Patrick pburyk at gmail.com From a.lauterer at proxmox.com Mon Jan 10 09:34:34 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 10 Jan 2022 09:34:34 +0100 Subject: [PVE-User] Subject: Proxmox VE 7.1-2 Installation question In-Reply-To: References: Message-ID: That the IP address is configured on the vmbr0 interface is normal and the basic default after a fresh installation. You can change that later if you want to place the MGMT IP somewhere else. The vmbr0 interface is needed as the "virtual switch" to which the guests will be connected to. The vmbr0 is using the physical interface as bridge port. Check out the /etc/network/interfaces file for more details. Regarding the network not working; do you have more than one NIC in that system? If so, it is likely that the installer selected the NIC that is currently not connected. You could try to just plug in the cable to the other NIC or change the /etc/network/interfaces file accordingly so that the vmbr0 will use that other NIC. 
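For illustration, a minimal /etc/network/interfaces along those lines could
look like the following sketch (the NIC name eno1 and the addresses are only
placeholders; adjust them to your hardware and network):

  auto lo
  iface lo inet loopback

  iface eno1 inet manual

  auto vmbr0
  iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0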
Then either reboot or run `ifreload -a` to apply the network config. If that is not the case, and there is only one NIC in the system, try to run `ip l s up` and also check your kernel logs / dmesg for any messages regarding the network that might give us more information why the NICs are not up. Best regards, Aaron On 1/9/22 20:01, Patrick Buryk wrote: > Hello, All - > > I'm a new user of Proxmox, having just installed VE 7.1-2 for about the 5th > time trying to resolve the following issue: > > After a successful install, I login to my host as root. > I cannot ping in or out of the host; "ip address" & "ip link" commands show > that the eno1 and vmbr0 interfaces are "UP", but their states are "DOWN". > Static IP address is shown on vmbr0 interface only - I would have imagined > that it would have been set to the eno1 interface instead during the > install but, again, this is my first exposure to this product. Connection > to my switch show no activity from the host. Ethernet cable has been > verified to work on another host. > > Any ideas?? > > Thanks, and best wishes to all in 2022! > > Cheers! > Patrick > pburyk at gmail.com > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From martin.konold at konsec.com Mon Jan 10 10:00:50 2022 From: martin.konold at konsec.com (Konold, Martin) Date: Mon, 10 Jan 2022 10:00:50 +0100 Subject: [PVE-User] Subject: Proxmox VE 7.1-2 Installation question In-Reply-To: References: Message-ID: <9582a88b2bdba2b803dde227cef900a8@konsec.com> Hi, I made the experience that the device names did change between the installation and the first boot. You may need to adjust the device names in /etc/network/interfaces. Regards ppa. Martin Konold -- Martin Konold - Prokurist, CTO KONSEC GmbH -? make things real Amtsgericht Stuttgart, HRB 23690 Gesch?ftsf?hrer: Andreas Mack Im K?ller 3, 70794 Filderstadt, Germany Am 2022-01-10 09:34, schrieb Aaron Lauterer: > That the IP address is configured on the vmbr0 interface is normal and > the basic default after a fresh installation. You can change that > later if you want to place the MGMT IP somewhere else. > The vmbr0 interface is needed as the "virtual switch" to which the > guests will be connected to. > The vmbr0 is using the physical interface as bridge port. Check out > the /etc/network/interfaces file for more details. > > Regarding the network not working; do you have more than one NIC in > that system? > If so, it is likely that the installer selected the NIC that is > currently not connected. You could try to just plug in the cable to > the other NIC or change the /etc/network/interfaces file accordingly > so that the vmbr0 will use that other NIC. Then either reboot or run > `ifreload -a` to apply the network config. > > If that is not the case, and there is only one NIC in the system, try > to run `ip l s up` and also check your kernel logs / dmesg for > any messages regarding the network that might give us more information > why the NICs are not up. > > Best regards, > Aaron > > On 1/9/22 20:01, Patrick Buryk wrote: >> Hello, All - >> >> I'm a new user of Proxmox, having just installed VE 7.1-2 for about >> the 5th >> time trying to resolve the following issue: >> >> After a successful install, I login to my host as root. >> I cannot ping in or out of the host; "ip address" & "ip link" commands >> show >> that the eno1 and vmbr0 interfaces are "UP", but their states are >> "DOWN". 
>> Static IP address is shown on vmbr0 interface only - I would have >> imagined >> that it would have been set to the eno1 interface instead during the >> install but, again, this is my first exposure to this product. >> Connection >> to my switch show no activity from the host. Ethernet cable has been >> verified to work on another host. >> >> Any ideas?? >> >> Thanks, and best wishes to all in 2022! >> >> Cheers! >> Patrick >> pburyk at gmail.com >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From dziobek at hlrs.de Mon Jan 10 14:13:50 2022 From: dziobek at hlrs.de (Martin Dziobek) Date: Mon, 10 Jan 2022 14:13:50 +0100 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? Message-ID: <20220110141350.716d9727@schleppmd.hlrs.de> Hi pve-users ! Does anybody has experiences if proxmox works flawlessly to manage a large zfs volume consisting of a SAS-connected JBOD of 60 * 1TB-HDDs ? Right now, management is done with a regular Debian 11 installation, and rebooting the thing always ends up in a timeout mess at network startup, because it takes ages to enumerate all those member disks, import the zpool and export it via NFS. I am considering to install Proxmox on this server for the sole purpose of smooth startup and management operation. Might that be a stable solution ? Best regards, Martin From athompso at athompso.net Mon Jan 10 15:32:19 2022 From: athompso at athompso.net (Adam Thompson) Date: Mon, 10 Jan 2022 14:32:19 +0000 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de> References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: IMHO, you want TrueNAS, not Proxmox, to solve this problem. Ultimately, the problem is SystemD's notion of dependencies and timeouts, which Proxmox+OpenZFS still relies on. Source: I have a Debian 10 system with 29 storage devices, 24 of which are multipathed, and have had to edit & override various systemd settings to get it to boot cleanly, reliably. -Adam Get Outlook for Android ________________________________ From: pve-user on behalf of Martin Dziobek Sent: Monday, January 10, 2022 7:13:50 AM To: pve-user at pve.proxmox.com Subject: [PVE-User] Proxmox and ZFS on large JBOD ? Hi pve-users ! Does anybody has experiences if proxmox works flawlessly to manage a large zfs volume consisting of a SAS-connected JBOD of 60 * 1TB-HDDs ? Right now, management is done with a regular Debian 11 installation, and rebooting the thing always ends up in a timeout mess at network startup, because it takes ages to enumerate all those member disks, import the zpool and export it via NFS. I am considering to install Proxmox on this server for the sole purpose of smooth startup and management operation. Might that be a stable solution ? Best regards, Martin _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From christian.kraus at ckc-it.at Mon Jan 10 15:31:40 2022 From: christian.kraus at ckc-it.at (Christian Kraus) Date: Mon, 10 Jan 2022 15:31:40 +0100 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? 
In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de> References: AS4G2AHIW0xulIxm7kiSaYdfcFwo+f////8C Message-ID: <8FA82EBE-AA47-4AD7-8CF5-FBC9EA6F758C@ckc-it.at> I would give truenas scale a try for that it also is build on top of debian and is optimized for zfs storage Von meinem iPhone gesendet > Am 10.01.2022 um 14:21 schrieb Martin Dziobek : > > ?Hi pve-users ! > > Does anybody has experiences if proxmox works > flawlessly to manage a large zfs volume consisting > of a SAS-connected JBOD of 60 * 1TB-HDDs ? > > Right now, management is done with a regular > Debian 11 installation, and rebooting the thing > always ends up in a timeout mess at network startup, > because it takes ages to enumerate all those member disks, > import the zpool and export it via NFS. > > I am considering to install Proxmox on this server for the > sole purpose of smooth startup and management operation. > Might that be a stable solution ? > > Best regards, > Martin > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From athompso at athompso.net Mon Jan 10 15:32:19 2022 From: athompso at athompso.net (Adam Thompson) Date: Mon, 10 Jan 2022 14:32:19 +0000 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de> References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: IMHO, you want TrueNAS, not Proxmox, to solve this problem. Ultimately, the problem is SystemD's notion of dependencies and timeouts, which Proxmox+OpenZFS still relies on. Source: I have a Debian 10 system with 29 storage devices, 24 of which are multipathed, and have had to edit & override various systemd settings to get it to boot cleanly, reliably. -Adam Get Outlook for Android ________________________________ From: pve-user on behalf of Martin Dziobek Sent: Monday, January 10, 2022 7:13:50 AM To: pve-user at pve.proxmox.com Subject: [PVE-User] Proxmox and ZFS on large JBOD ? Hi pve-users ! Does anybody has experiences if proxmox works flawlessly to manage a large zfs volume consisting of a SAS-connected JBOD of 60 * 1TB-HDDs ? Right now, management is done with a regular Debian 11 installation, and rebooting the thing always ends up in a timeout mess at network startup, because it takes ages to enumerate all those member disks, import the zpool and export it via NFS. I am considering to install Proxmox on this server for the sole purpose of smooth startup and management operation. Might that be a stable solution ? Best regards, Martin _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From athompso at athompso.net Mon Jan 10 16:05:30 2022 From: athompso at athompso.net (Adam Thompson) Date: Mon, 10 Jan 2022 15:05:30 +0000 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: For additional clarity, I meant TrueNAS CORE, which is still based on FreeBSD and is still free as in beer. TrueNAS SCALE is Linux-based, as another commenter mentioned, but also optimized for large storage... and is also free? SCALE is the future direction, I think, while CORE is the tried and true, mature product but is now sort of deprecated? iX's messaging and branding isn't very clear on this. Either should work for you, AFAICT, a lot better than trying to use Proxmox as a NAS. 
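If someone does want to keep such a pool on plain Debian or Proxmox, the usual
knob for slow imports is a systemd drop-in that raises the start timeout of the
import unit, roughly like this (the unit name and the value are assumptions;
check whether zfs-import-cache or zfs-import-scan is the one active on your
system):

  systemctl edit zfs-import-cache.service

  # in the override file that opens:
  [Service]
  TimeoutStartSec=600

That only papers over the enumeration delay, of course; it does not make the
60-disk import itself any faster.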
FWIW, I have used TrueNAS CORE in the past (when it was still called FreeNAS) as the NFS server *for* a Proxmox install, and it was solid. -Adam -----Original Message----- From: pve-user On Behalf Of Adam Thompson Sent: Monday, January 10, 2022 8:32 AM To: Proxmox VE user list ; pve-user at pve.proxmox.com Subject: Re: [PVE-User] Proxmox and ZFS on large JBOD ? IMHO, you want TrueNAS, not Proxmox, to solve this problem. Ultimately, the problem is SystemD's notion of dependencies and timeouts, which Proxmox+OpenZFS still relies on. Source: I have a Debian 10 system with 29 storage devices, 24 of which are multipathed, and have had to edit & override various systemd settings to get it to boot cleanly, reliably. -Adam Get Outlook for Android ________________________________ From: pve-user on behalf of Martin Dziobek Sent: Monday, January 10, 2022 7:13:50 AM To: pve-user at pve.proxmox.com Subject: [PVE-User] Proxmox and ZFS on large JBOD ? Hi pve-users ! Does anybody has experiences if proxmox works flawlessly to manage a large zfs volume consisting of a SAS-connected JBOD of 60 * 1TB-HDDs ? Right now, management is done with a regular Debian 11 installation, and rebooting the thing always ends up in a timeout mess at network startup, because it takes ages to enumerate all those member disks, import the zpool and export it via NFS. I am considering to install Proxmox on this server for the sole purpose of smooth startup and management operation. Might that be a stable solution ? Best regards, Martin _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From athompso at athompso.net Mon Jan 10 16:05:30 2022 From: athompso at athompso.net (Adam Thompson) Date: Mon, 10 Jan 2022 15:05:30 +0000 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: For additional clarity, I meant TrueNAS CORE, which is still based on FreeBSD and is still free as in beer. TrueNAS SCALE is Linux-based, as another commenter mentioned, but also optimized for large storage... and is also free? SCALE is the future direction, I think, while CORE is the tried and true, mature product but is now sort of deprecated? iX's messaging and branding isn't very clear on this. Either should work for you, AFAICT, a lot better than trying to use Proxmox as a NAS. FWIW, I have used TrueNAS CORE in the past (when it was still called FreeNAS) as the NFS server *for* a Proxmox install, and it was solid. -Adam -----Original Message----- From: pve-user On Behalf Of Adam Thompson Sent: Monday, January 10, 2022 8:32 AM To: Proxmox VE user list ; pve-user at pve.proxmox.com Subject: Re: [PVE-User] Proxmox and ZFS on large JBOD ? IMHO, you want TrueNAS, not Proxmox, to solve this problem. Ultimately, the problem is SystemD's notion of dependencies and timeouts, which Proxmox+OpenZFS still relies on. Source: I have a Debian 10 system with 29 storage devices, 24 of which are multipathed, and have had to edit & override various systemd settings to get it to boot cleanly, reliably. -Adam Get Outlook for Android ________________________________ From: pve-user on behalf of Martin Dziobek Sent: Monday, January 10, 2022 7:13:50 AM To: pve-user at pve.proxmox.com Subject: [PVE-User] Proxmox and ZFS on large JBOD ? 
Hi pve-users ! Does anybody has experiences if proxmox works flawlessly to manage a large zfs volume consisting of a SAS-connected JBOD of 60 * 1TB-HDDs ? Right now, management is done with a regular Debian 11 installation, and rebooting the thing always ends up in a timeout mess at network startup, because it takes ages to enumerate all those member disks, import the zpool and export it via NFS. I am considering to install Proxmox on this server for the sole purpose of smooth startup and management operation. Might that be a stable solution ? Best regards, Martin _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From ralf.storm at konzept-is.de Mon Jan 10 16:59:33 2022 From: ralf.storm at konzept-is.de (Ralf Storm) Date: Mon, 10 Jan 2022 16:59:33 +0100 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de> References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: <20080b37-49dd-cc91-ceea-874f860c22db@konzept-is.de> Hello Martin, should be no problem and you can configure wait-time at startup for ZFS to avoid timeouts, this is also described in the proxmox doku in the ZFS chapter. The doku for proxmox is very good, keep on it and you will be happy. best regards Ralf Am 10/01/2022 um 14:13 schrieb Martin Dziobek: > Hi pve-users ! > > Does anybody has experiences if proxmox works > flawlessly to manage a large zfs volume consisting > of a SAS-connected JBOD of 60 * 1TB-HDDs ? > > Right now, management is done with a regular > Debian 11 installation, and rebooting the thing > always ends up in a timeout mess at network startup, > because it takes ages to enumerate all those member disks, > import the zpool and export it via NFS. > > I am considering to install Proxmox on this server for the > sole purpose of smooth startup and management operation. > Might that be a stable solution ? > > Best regards, > Martin > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- Ralf Storm Systemadministrator From kyleaschmitt at gmail.com Mon Jan 10 20:50:48 2022 From: kyleaschmitt at gmail.com (Kyle Schmitt) Date: Mon, 10 Jan 2022 13:50:48 -0600 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de> References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: I would separate it out. I'm running a moderate sized ZFS array on one box, with a small 10GB ethernet network dedicated to serving NFS from that array to my proxmox nodes. I had bonded 1G before for the same setup, and usually it was a non-issue, but sometimes there was a slowdown. With 10G no slowdown on my workloads. I haven't explored serving ZFS from linux, so mine is on FreeBSD, and it's rock solid. --Kyle On Mon, Jan 10, 2022 at 7:21 AM Martin Dziobek wrote: > > Hi pve-users ! > > Does anybody has experiences if proxmox works > flawlessly to manage a large zfs volume consisting > of a SAS-connected JBOD of 60 * 1TB-HDDs ? 
> > Right now, management is done with a regular > Debian 11 installation, and rebooting the thing > always ends up in a timeout mess at network startup, > because it takes ages to enumerate all those member disks, > import the zpool and export it via NFS. > > I am considering to install Proxmox on this server for the > sole purpose of smooth startup and management operation. > Might that be a stable solution ? > > Best regards, > Martin > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From kyleaschmitt at gmail.com Mon Jan 10 20:53:00 2022 From: kyleaschmitt at gmail.com (Kyle Schmitt) Date: Mon, 10 Jan 2022 13:53:00 -0600 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: Oh, and since we're throwing around NAS flavors that support ZFS, Xigma-NAS, which used to be NAS4Free, is free as in speech and beer, FreeBSD based. The UI looks a little dated, but it's very solid. I only moved away from it because I don't use any features but ZFS and NFS. --Kyle On Mon, Jan 10, 2022 at 1:50 PM Kyle Schmitt wrote: > > I would separate it out. I'm running a moderate sized ZFS array on > one box, with a small 10GB ethernet network dedicated to serving NFS > from that array to my proxmox nodes. I had bonded 1G before for the > same setup, and usually it was a non-issue, but sometimes there was a > slowdown. With 10G no slowdown on my workloads. > > I haven't explored serving ZFS from linux, so mine is on FreeBSD, and > it's rock solid. > > --Kyle > > On Mon, Jan 10, 2022 at 7:21 AM Martin Dziobek wrote: > > > > Hi pve-users ! > > > > Does anybody has experiences if proxmox works > > flawlessly to manage a large zfs volume consisting > > of a SAS-connected JBOD of 60 * 1TB-HDDs ? > > > > Right now, management is done with a regular > > Debian 11 installation, and rebooting the thing > > always ends up in a timeout mess at network startup, > > because it takes ages to enumerate all those member disks, > > import the zpool and export it via NFS. > > > > I am considering to install Proxmox on this server for the > > sole purpose of smooth startup and management operation. > > Might that be a stable solution ? > > > > Best regards, > > Martin > > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at lists.proxmox.com > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From pburyk at gmail.com Tue Jan 11 02:50:14 2022 From: pburyk at gmail.com (Patrick Buryk) Date: Mon, 10 Jan 2022 20:50:14 -0500 Subject: [PVE-User] Proxmox VE 7.1-2 Installation question - SOLVED, I think... Message-ID: As a follow-up to my recent posts regarding Network interfaces not coming UP after a successful Proxmox 7.1-2 Installation - I think I've resolved the problem. Many previous (related) posts here (and in the Proxmox Release Notes and Bug Reports) spoke about a known bug in the Intel e1000e driver and suggested some work-arounds, implemented as commands issued at the Linux Command level. Before I tried these I thought I might visit the Dell Support site just to see if there was any news there about this. What I found (to my surprise!) was a recent BIOS update (A26) for the M6800 class machine. This was put out since I updated all my drivers and BIOS for Windows 10 Pro, last year. The description of the A26 BIOS update listed a fix to the e1000e LAN interface. 
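For reference, the command-level workaround usually quoted for the e1000e
"hardware unit hang" is to switch off the segmentation offloads on the NIC,
along these lines (eno1 is just an example name, and whether this is needed at
all depends on the NIC/driver/firmware combination):

  ethtool -K eno1 tso off gso off
  # to persist it, a post-up line can be added for the bridge port in /etc/network/interfaces:
  #   post-up ethtool -K eno1 tso off gso off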
I reattached my Win10Pro system disk and upgraded the BIOS to A26. After removing Win10Pro, re-inserting my Proxmox system SSD and booting up ther corresponding indicator lights on my switch illuminated. I was then able to ping my network gateway from the Proxmox host and login to the Proxmox host via remote browser. A brief "tour" of the various parts of the Proxmox GUI suggest that my network issue has been resolved and I can move forward with intended undertakings. Thanks, once again, for all of the suggestions, both from the mailing-list posts and via private email. I will end in reiterating (2) basic rules of computer administration: 1) RTFM 2) Keep drivers and BIOS levels - up to date!!! VBR, Patrick Buryk From m.witte at neusta.de Wed Jan 12 14:57:51 2022 From: m.witte at neusta.de (Marco Witte) Date: Wed, 12 Jan 2022 14:57:51 +0100 Subject: [PVE-User] proxmox ceph osd option to move wal to a new device Message-ID: <2a32d0af-d323-f38d-cbf6-6a11ba16c452@neusta.de> One wal drive was failing. So I replaced it with: pveceph osd destroy 17 --cleanup 1 pveceph osd destroy 18 --cleanup 1 pveceph osd destroy 19 --cleanup 1 This removed the three disks and removed the osd-wal. The Drive sdf is the replacement for the failed wal device that above three osds ( sdb, sdc,sdd ) used: pveceph osd create /dev/sdb -wal_dev /dev/sdf pveceph osd create /dev/sdc -wal_dev /dev/sdf pveceph osd create /dev/sdd -wal_dev /dev/sdf This approach worked fine, but took a lot of time. So I figured it would be better to change the wal for the existing osd: At this state /dev/sdf is completly empty (has no lvm/wiped) and all three osd still use the failing wal device /dev/sdh. ceph-volume lvm new-wal --osd-id 17 --osd-fsid 01234567-1234-1234-123456789012 --target /dev/sdf Which obviously fails, because the target should be --target vgname/new_wal Question part: What would be the fast way to make the new device /dev/sdf the wal device, without destroying the osds 17 18 19? Versions: pve-manager/7.1-8/5b267f33 (running kernel: 5.13.19-2-pve) ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable) Thank you From elacunza at binovo.es Wed Jan 12 15:30:23 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 12 Jan 2022 15:30:23 +0100 Subject: [PVE-User] proxmox ceph osd option to move wal to a new device In-Reply-To: <2a32d0af-d323-f38d-cbf6-6a11ba16c452@neusta.de> References: <2a32d0af-d323-f38d-cbf6-6a11ba16c452@neusta.de> Message-ID: Hi Marco, El 12/1/22 a las 14:57, Marco Witte escribi?: > One wal drive was failing. So I replaced it with: > pveceph osd destroy 17 --cleanup 1 > pveceph osd destroy 18 --cleanup 1 > pveceph osd destroy 19 --cleanup 1 > > This removed the three disks and removed the osd-wal. > > The Drive sdf is the replacement for the failed wal device that above > three osds ( sdb, sdc,sdd ) used: > pveceph osd create /dev/sdb -wal_dev /dev/sdf > pveceph osd create /dev/sdc -wal_dev /dev/sdf > pveceph osd create /dev/sdd -wal_dev /dev/sdf > > This approach worked fine, but took a lot of time. > > So I figured it would be better to change the wal for the existing osd: > At this state /dev/sdf is completly empty (has no lvm/wiped) and all > three osd still use the failing wal device /dev/sdh. 
> > ceph-volume lvm new-wal --osd-id 17 --osd-fsid > 01234567-1234-1234-123456789012 --target /dev/sdf > > Which obviously fails, because the target should be --target > vgname/new_wal > > Question part: > What would be the fast way to make the new device /dev/sdf the wal > device, without destroying the osds 17 18 19? > I have moved wal/db between physical partitions to increase size in some old upgraded clusters. It should be similar. Try searching "Ceph bluestore db resize". Otherwise I can send you my procedure with spanish comments... :) Cheers Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From tsabolov at t8.ru Thu Jan 13 09:13:53 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 13 Jan 2022 11:13:53 +0300 Subject: [PVE-User] Remove 1-2 OSD from PVE Cluster Message-ID: <37cd7904-a225-76f6-7bd3-973a75be2c74@t8.ru> Hello to all. I have cluster with 7 node. Storage for VM disk and others pool data is on ceph version 15.2.15 (4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable) On pve-7 I have 10 OSD and for test I want to remove 2 osd from this node. I write some steps command how I remove this OSD from pve-7 cluster and ceph storage. 1) root at pve-7 ~ # ceph osd tree 2) ceph osd reweight 2.1) ceph osd reweight osd.${ID} 0.98 2.2) ceph osd reweight osd.${ID} 0.0 can I set the 0.0 to clean osd.X before remove it? 3) When osd is clean from data can I? ceph osd down osd.${ID} 4) Remove the osd from cluster ceph osd out osd.${ID} 5) Stop the OSD and umount the osd systemctl stop ceph-osd@${ID} umount /var/lib/ceph/osd/ceph-${ID} 6) Remove the osd from CRUSH map: ceph osd crush remove osd.${ID} 7) Remove the user of OSD: ceph auth del osd.${ID} 8) And now full delete the OSD: ceph osd rm osd.${ID} 9) Last command ceph osd tree (removed OSD not showing Can some one suggest me my steps command is correct ? Or I need change some steps? Thanks . -- ------------------------- Sergey TS The best Regard From elacunza at binovo.es Thu Jan 13 15:48:11 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 13 Jan 2022 15:48:11 +0100 Subject: [PVE-User] Remove 1-2 OSD from PVE Cluster In-Reply-To: <37cd7904-a225-76f6-7bd3-973a75be2c74@t8.ru> References: <37cd7904-a225-76f6-7bd3-973a75be2c74@t8.ru> Message-ID: <8973317e-c510-11ae-7705-af85ed3ad063@binovo.es> Hi, Why not use "Remove OSD" button in PVE WUI? :-) El 13/1/22 a las 9:13, ?????? ??????? escribi?: > Hello to all. > > I have cluster with 7 node. > > Storage for VM disk and others pool data is on ceph version 15.2.15 > (4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable) > > On pve-7 I have 10 OSD and for test I want to remove 2 osd from this > node. > > I write some steps command how I remove this OSD from pve-7 cluster > and ceph storage. > > 1) root at pve-7 ~ # ceph osd tree > > 2) ceph osd reweight > 2.1) ceph osd reweight osd.${ID} 0.98 > 2.2) ceph osd reweight osd.${ID} 0.0 can I set the 0.0 to clean osd.X > before remove it? > > 3) When osd is clean from data can I? 
> ceph osd down osd.${ID} > > 4) Remove the osd from cluster > ceph osd out osd.${ID} > > 5) Stop the OSD and umount the osd > systemctl stop ceph-osd@${ID} > umount /var/lib/ceph/osd/ceph-${ID} > > 6) Remove the osd from CRUSH map: > ceph osd crush remove osd.${ID} > > 7) Remove the user of OSD: > ceph auth del osd.${ID} > > 8) And now full delete the OSD: > ceph osd rm osd.${ID} > > 9) Last command > > ceph osd tree (removed OSD not showing > > Can some one suggest me my steps command is correct ? > > Or I need change some steps? > > Thanks . > Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From tsabolov at t8.ru Thu Jan 13 16:00:30 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 13 Jan 2022 18:00:30 +0300 Subject: [PVE-User] Remove 1-2 OSD from PVE Cluster In-Reply-To: <8973317e-c510-11ae-7705-af85ed3ad063@binovo.es> References: <37cd7904-a225-76f6-7bd3-973a75be2c74@t8.ru> <8973317e-c510-11ae-7705-af85ed3ad063@binovo.es> Message-ID: Hi, I not have Remove OSD in GUI. If I select OSD.X? have:? Start (if stoped)? , Stop , Restart, Out , In (if is out) and on More button: Srub , Deep Scrub. But the most important is in gui not have button to clean OSD from data. 13.01.2022 17:48, Eneko Lacunza ?????: > Hi, > > Why not use "Remove OSD" button in PVE WUI? :-) > > El 13/1/22 a las 9:13, ?????? ??????? escribi?: >> Hello to all. >> >> I have cluster with 7 node. >> >> Storage for VM disk and others pool data is on ceph version 15.2.15 >> (4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable) >> >> On pve-7 I have 10 OSD and for test I want to remove 2 osd from this >> node. >> >> I write some steps command how I remove this OSD from pve-7 cluster >> and ceph storage. >> >> 1) root at pve-7 ~ # ceph osd tree >> >> 2) ceph osd reweight >> 2.1) ceph osd reweight osd.${ID} 0.98 >> 2.2) ceph osd reweight osd.${ID} 0.0 can I set the 0.0 to clean osd.X >> before remove it? >> >> 3) When osd is clean from data can I? >> ceph osd down osd.${ID} >> >> 4) Remove the osd from cluster >> ceph osd out osd.${ID} >> >> 5) Stop the OSD and umount the osd >> systemctl stop ceph-osd@${ID} >> umount /var/lib/ceph/osd/ceph-${ID} >> >> 6) Remove the osd from CRUSH map: >> ceph osd crush remove osd.${ID} >> >> 7) Remove the user of OSD: >> ceph auth del osd.${ID} >> >> 8) And now full delete the OSD: >> ceph osd rm osd.${ID} >> >> 9) Last command >> >> ceph osd tree (removed OSD not showing >> >> Can some one suggest me my steps command is correct ? >> >> Or I need change some steps? >> >> Thanks . >> > > Eneko Lacunza > Zuzendari teknikoa | Director t?cnico > Binovo IT Human Project > > Tel. +34 943 569 206 | https://www.binovo.es > Astigarragako Bidea, 2 - 2? izda. 
Oficina 10-11, 20180 Oiartzun > > https://www.youtube.com/user/CANALBINOVO > https://www.linkedin.com/company/37269706/ > > Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From mach at swishmail.com Fri Jan 14 09:18:55 2022 From: mach at swishmail.com (Kris von Mach | Swishmail.com) Date: Fri, 14 Jan 2022 16:18:55 +0800 Subject: FreeBSD VM random high CPU/Disk IO Message-ID: <83cabe9e-0e56-83b7-8070-7e7f81f3511a@swishmail.com> Hello, Since upgrading from Proxmox 6.4 to 7.1 I have experienced issues with FreeBSD VM's. Please keep in mind that I had no issues on 6.4. When the issue occurs, Proxmox summary shows excessively high CPU and Disk IO for the VM, network is normal. CPU shows at near 100% and Disk IO shows at 2.5GB/sec. The VM itself shows regular low cpu usage and keeps on spawning new processes and doesn't kill old ones, they keep on deinit. And load average goes from below 1 to over 200. Nothing is actually using any CPU, so this definitely seems like disk IO issue. There are no errors logged anywhere, on proxmox host or freebsd vm. This happens on both raw and qcow2 VM's. I have tried switching from default io_uring to native and threads, as well as combinations of no cache and writeback using VirtIO SCSI single. On FreeBSD vm, I have also tried different time counters from HPET, TSC-low, to kvmclock. And I've disabled balloon memory just in case. I have also tried different CPU options from host, to actual host processor to kvm64. It is happening on both Intel and Amd hosts, so probably not related to CPU. This happens randomly, usually during busier times. Sometimes it happens within few hours, sometimes it takes days to occur. I have also tried pve-kernel-5.13.19-1-pve and pve-kernel-5.15.7-1-pve. I believe this has something to do with the issue that was occurring on Linux VM's with IO errors. pve-qemu-kvm_6.1.0-3 doesn't seem to fix this issue on FreeBSD VMs. Only way that I could resolve it is to reboot the VM. Anything else I could try? This definitely seems like a bug, as it was working fine under Proxmox 6.4. __ Kris From elacunza at binovo.es Fri Jan 14 09:30:25 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 14 Jan 2022 09:30:25 +0100 Subject: [PVE-User] FreeBSD VM random high CPU/Disk IO In-Reply-To: References: Message-ID: <96813954-cb94-d871-260b-3009c0ca2453@binovo.es> Hi Kris, We have two pfSense VMs on PVE 7.1 clusters, we haven't seen this issue. Both VMs have Ceph storage (Pacific). Did you check memory usage inside VM? If it's spawning new processes and not killing old ones, this seems a swaping issue? El 14/1/22 a las 9:18, Kris von Mach | Swishmail.com via pve-user escribi?: > Hello, > > Since upgrading from Proxmox 6.4 to 7.1 I have experienced issues with > FreeBSD VM's. Please keep in mind that I had no issues on 6.4. > > When the issue occurs, Proxmox summary shows excessively high CPU and > Disk IO for the VM, network is normal. CPU shows at near 100% and Disk > IO shows at 2.5GB/sec. The VM itself shows regular low cpu usage and > keeps on spawning new processes and doesn't kill old ones, they keep > on deinit. And load average goes from below 1 to over 200. Nothing is > actually using any CPU, so this definitely seems like disk IO issue. > > There are no errors logged anywhere, on proxmox host or freebsd vm. > > This happens on both raw and qcow2 VM's. 
I have tried switching from > default io_uring to native and threads, as well as combinations of no > cache and writeback using VirtIO SCSI single. > > On FreeBSD vm, I have also tried different time counters from HPET, > TSC-low, to kvmclock. > > And I've disabled balloon memory just in case. > > I have also tried different CPU options from host, to actual host > processor to kvm64. It is happening on both Intel and Amd hosts, so > probably not related to CPU. > > This happens randomly, usually during busier times. Sometimes it > happens within few hours, sometimes it takes days to occur. > > I have also tried pve-kernel-5.13.19-1-pve and pve-kernel-5.15.7-1-pve. > > I believe this has something to do with the issue that was occurring > on Linux VM's with IO errors. pve-qemu-kvm_6.1.0-3 doesn't seem to fix > this issue on FreeBSD VMs. > > Only way that I could resolve it is to reboot the VM. > > Anything else I could try? > > This definitely seems like a bug, as it was working fine under Proxmox > 6.4. > > __ > Kris Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From elacunza at binovo.es Fri Jan 14 09:34:28 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 14 Jan 2022 09:34:28 +0100 Subject: proxmox ceph osd option to move wal to a new device In-Reply-To: <5eaaf695-e2fd-bef1-3f1e-a6a2fc89acdf@neusta.de> References: <5eaaf695-e2fd-bef1-3f1e-a6a2fc89acdf@neusta.de> Message-ID: Hi Marco, Sorry for the delay, yesterday was a busy day... I'm posting this to the list too, it may be helpful to others. Remenber, this procedure was for physical partitions and for resizing Bluestore DB. === Cambiar/ampliar la partici?n block.db de Bluestore 1. Obtener la partici?n actual block.db: (BLOCKDB_PART) ls -l /var/lib/ceph/osd/ceph-OSDID/block.db 2. Comprobar datos Bluestrore: ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-OSDID 3. Obtener "Partition unique GUID" de la partici?n block.db: sgdisk -i BLOCKDB_PART_NUM BLOCKDB_PART_DISK 4. Estudiar particiones del disco donde queremos crear la nueva partici?n, para ver d?nde crear la nueva partici?n (BLOCKDB_PART_NUEVO_NUM, BLOCKDB_PART_NUEVO_DISK, BLOCKDB_PART_NUEVO_START_POS) sgdisk -p BLOCKDB_PART_NUEVO_DISK 5. Crear nueva partici?n sgdisk --new=BLOCKDB_PART_NUEVO_NUM:BLOCKDB_PART_NUEVO_START_POS:+30GiB --change-name="BLOCKDB_PART_NUEVO_NUM:ceph block.db" --typecode="BLOCKDB_PART_NUEVO_NUM:30cd0809-c2b2-499c-8879-2d6b78529876" --mbrtogpt BLOCKDB_PART_NUEVO_DISK 6. Recargar las particiones partprobe BLOCKDB_PART_NUEVO_DISK o bien partx -u BLOCKDB_PART_NUEVO_DISK 7. Establecer permisos adecuados a la nueva partici?n chown ceph.ceph BLOCKDB_PART_NUEVO_DEV 8. Parar OSD systemctl stop ceph-osd at OSDID 9. Copiar datos de la partici?n vieja a la nueva dd status=progress if=BLOCKDB_PART_DEV of=BLOCKDB_PART_NUEVO_DEV 10. Eliminar partici?n vieja y poner su "Partition unique GUID" a la nueva: sgdisk --delete=BLOCKDB_PART_NUM BLOCKDB_PART_DISK sgdisk --partition-guid="BLOCKDB_PART_NUEVO_NUM:PARTITION_UNIQUE_IDE" BLOCKDB_PART_NUEVO_DISK 11. Recargar las particiones partprobe BLOCKDB_PART_DISK partprobe BLOCKDB_PART_NUEVO_DISK o bien partx -u BLOCKDB_PART_DISK partx -u BLOCKDB_PART_NUEVO_DISK 12. Ajustar enlace a block.db en OSD cd /var/lib/ceph/osd/ceph-OSDID rm block.db ln -s BLOCKDB_PART_DEV block.db 13. 
Ampliar de forma efectiva el block.db ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-OSDID 14. Poner en marcha OSD systemctl start ceph-osd at OSDID 15. Compactar OSD en caso de que haya spillover ceph daemon osd.OSDID compact Hope this helps... Cheers El 13/1/22 a las 8:23, Marco Witte escribi?: > Hola Eneko, > > I think this is everything I can offer in spanish except ordering an > orange juice :) > > I would be interested to test out your commandchain on my testcluster. > > Thanks again for your input. Will take a look at "Ceph bluestore db > resize". > > Kind Regards > > Max > Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From mach at swishmail.com Fri Jan 14 09:54:15 2022 From: mach at swishmail.com (Kris von Mach | Swishmail.com) Date: Fri, 14 Jan 2022 16:54:15 +0800 Subject: [PVE-User] FreeBSD VM random high CPU/Disk IO In-Reply-To: References: Message-ID: Hi Eneko, No the VM never runs out of memory and doesn't touch the swap. The processes that start and deinit are dovecot processes, so they are very tiny. pfSense probably doesn't do much Disk IO. So this issue might not show up. Our low use (atleast disk io wise) servers also run fine. It's basically as if the Disk IO stalls momentarly and doesn't return to normal. And so far it hasn't happened where more than one VM on the same host experiences this at the same time. On 1/14/2022 4:30 PM, Eneko Lacunza via pve-user wrote: > Did you check memory usage inside VM? If it's spawning new processes and > not killing old ones, this seems a swaping issue? > From m.witte at neusta.de Fri Jan 14 12:50:06 2022 From: m.witte at neusta.de (Marco Witte) Date: Fri, 14 Jan 2022 12:50:06 +0100 Subject: [PVE-User] proxmox ceph osd option to move wal to a new device In-Reply-To: References: <5eaaf695-e2fd-bef1-3f1e-a6a2fc89acdf@neusta.de> Message-ID: Appreciated. Will testdrive this. Opinion: What we need is: pveceph changewal OSD-ID -wal_dev /dev/sdx if NEW-WAL-DEVICE is a plain device /dev/sdx, it should work similar to: pveceph osd create /dev/sdf -wal_dev /dev/sdx Any git / issue tracker where I can put such a request? On 14.01.22 09:34, Eneko Lacunza via pve-user wrote: > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- ? Marco "Max" Witte - Senior Linux Systemadministrator - neusta GmbH | Ein team neusta Unternehmen Konsul-Smidt-Stra?e 24 28217 Bremen Fon: +49(0)421.20696-0 Fax: +49(0)421.20696-99 l.seinschedt at neusta.de www.neusta.de | www.team-neusta.de www.facebook.com/teamneusta www.twitter.com/teamneusta Gesch?ftsf?hrende Gesellschafter: Fabian Gutsche, Uwe Scheja, Dirk Schwampe, Lars Seinschedt Registergericht: Amtsgericht Bremen Handelsregister: HRB 14039 From elacunza at binovo.es Fri Jan 14 13:35:47 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 14 Jan 2022 13:35:47 +0100 Subject: PBS backup error/timeout - but backups appears in storage Message-ID: <1e2ca074-5f39-5a4a-31bc-fc4a01a0e398@binovo.es> Hi all, We have a PVE backup task configured with a remote PBS. 
This is working very well, but some days ago a backup failed: INFO: Starting Backup of VM 103 (qemu) INFO: Backup started at 2022-01-13 01:11:34 INFO: status = running INFO: VM Name: odoo INFO: include disk 'scsi0' 'proxmox3_ssd_vm:vm-103-disk-1' 70G INFO: include disk 'scsi1' 'proxmox3_ssd_vm:vm-103-disk-0' 4G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating Proxmox Backup Server archive 'vm/103/2022-01-13T00:11:34Z' INFO: enabling encryption ERROR: VM 103 qmp command 'backup' failed - got timeout INFO: aborting backup job ERROR: VM 103 qmp command 'backup-cancel' failed - got wrong command id '899927:1374' (expected 899927:1375) INFO: resuming VM again ERROR: Backup of VM 103 failed - VM 103 qmp command 'backup' failed - got timeout INFO: Failed at 2022-01-13 01:14:09 INFO: Backup job finished with errors Some connectivity issue, I guess. What surprises me is that backup appears in storage... with verify stage "OK". PVE is 6.4 and PBS 2.1: PVE node: # pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve) pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) pve-kernel-5.4: 6.4-4 pve-kernel-helper: 6.4-4 pve-kernel-5.3: 6.1-6 pve-kernel-5.4.124-1-pve: 5.4.124-1 pve-kernel-5.4.106-1-pve: 5.4.106-1 pve-kernel-5.3.18-3-pve: 5.3.18-3 ceph: 14.2.20-pve1 ceph-fuse: 14.2.20-pve1 corosync: 3.1.2-pve1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.20-pve1 libproxmox-acme-perl: 1.1.0 libproxmox-backup-qemu0: 1.1.0-1 libpve-access-control: 6.4-3 libpve-apiclient-perl: 3.1-3 libpve-common-perl: 6.4-3 libpve-guest-common-perl: 3.1-5 libpve-http-server-perl: 3.2-3 libpve-storage-perl: 6.4-1 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 4.0.6-2 lxcfs: 4.0.6-pve1 novnc-pve: 1.1.0-1 proxmox-backup-client: 1.1.10-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.6-1 pve-cluster: 6.4-1 pve-container: 3.3-6 pve-docs: 6.4-2 pve-edk2-firmware: 2.20200531-1 pve-firewall: 4.1-4 pve-firmware: 3.2-4 pve-ha-manager: 3.1-1 pve-i18n: 2.3-1 pve-qemu-kvm: 5.2.0-6 pve-xtermjs: 4.7.0-3 qemu-server: 6.4-2 smartmontools: 7.2-pve2 spiceterm: 3.1-1 vncterm: 1.6-2 zfsutils-linux: 2.0.4-pve1 PBS node: ii? proxmox-backup-client 2.1.2-1??????????????????????? amd64??????? Proxmox Backup Client tools ii? proxmox-backup-docs 2.1.2-1??????????????????????? all????????? Proxmox Backup Documentation ii? proxmox-backup-file-restore 2.1.2-1??????????????????????? amd64??????? Proxmox Backup single file restore tools for pxar and block device backups ii? proxmox-backup-restore-image 0.3.1????????????????????????? amd64??????? Kernel/initramfs images for Proxmox Backup single-file restore. ii? proxmox-backup-server 2.1.2-1??????????????????????? amd64??????? Proxmox Backup Server daemon with tools and GUI Cheers Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From lists at merit.unu.edu Tue Jan 18 14:02:32 2022 From: lists at merit.unu.edu (mj) Date: Tue, 18 Jan 2022 14:02:32 +0100 Subject: [PVE-User] windows remote desktop services as VM on proxmox Message-ID: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> Hi, We are looking into windows remote desktop services, and would like to see how feasible it is to run it through our proxmox cluster. 
Currently our VMs are all linux, so we have no experience with windows VMs on proxmox. I have read some of the docs on the pve wiki, like https://pve.proxmox.com/wiki/Windows_10_guest_best_practices But looking for some practical guidance, like how well do windows VMs run under proxmox? Anything special to consider? Thanks! From dpl at ass.de Tue Jan 18 14:27:15 2022 From: dpl at ass.de (Daniel Plominski) Date: Tue, 18 Jan 2022 13:27:15 +0000 Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> Message-ID: A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 831 bytes Desc: OpenPGP digital signature URL: From dpl at ass.de Tue Jan 18 14:39:02 2022 From: dpl at ass.de (Daniel Plominski) Date: Tue, 18 Jan 2022 13:39:02 +0000 Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> Message-ID: Hello MJ, we have a lot of Windows VMs running in our Proxmox Cluster. The performance is very good, for most legacy / normal "2D" applications the software (RDP) rendering is sufficient. For special applications (CAD software) we have additional graphics cards passed through via "GPU passthrough" (Intel & AMD servers). This all works very well. Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From gaio at lilliput.linux.it Wed Jan 19 09:19:33 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Wed, 19 Jan 2022 09:19:33 +0100 Subject: [PVE-User] Force reclaiming space on a vdisk... Message-ID: Situation: VM with a disk on a ZFS storage, 2TB disk; the disk have some 'system' partitions, and two big partition /dev/sda7 and /dev/sda8 for /home and /srv. I've added two more disk (/dev/sdb and /dev/sdc), created one partition per disk and moved data from old partition to the ones. After that, i've deleted /dev/sda7 and /dev/sda8, so now /dev/sda is a 2TB disk with roughly 100GB of data in. Space get not reclaimed. All disks have 'discard=1'. I've tried to move /dev/sda to another ZFS storage (that have 500GB of free space), and move fail. How can i reclaim the free space on /dev/sda?! I need to create one partition on the free space, format it, fstrim it? Thanks. -- Siamo circondati da troppa gente piena di s?. E a quelli pieni di s?, io preferisco le persone piene di se, di ma, di forse. (Tonio Dell'Olio) From dpl at ass.de Wed Jan 19 09:49:14 2022 From: dpl at ass.de (Daniel Plominski) Date: Wed, 19 Jan 2022 08:49:14 +0000 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: References: Message-ID: Hello Marco, enable ZFS set compression=lz4 on the zvolume and perform a memory limit dd run (inside the vm). 
dd if=/dev/zero of=/mountpoint_sda7/CLEANUP bs=99M count=xxx (this will release the storage space) Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From tsabolov at t8.ru Wed Jan 19 12:22:29 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Wed, 19 Jan 2022 14:22:29 +0300 Subject: [PVE-User] Unexpected reboot on of 6 node Message-ID: <58d43866-6451-c266-7985-07ff89c0e05a@t8.ru> Hi, Like in this old thread https://forum.proxmox.com/threads/unexpected-reboots-help-need.34310/ I have similar problem. In cluster I have 7 node. root at pve-1: pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve) pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) pve-kernel-helper: 6.4-8 pve-kernel-5.4: 6.4-7 pve-kernel-5.4.143-1-pve: 5.4.143-1 pve-kernel-5.4.106-1-pve: 5.4.106-1 ceph: 15.2.15-pve1~bpo10 ceph-fuse: 15.2.15-pve1~bpo10 corosync: 3.1.2-pve1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: residual config ifupdown2: 3.0.0-1+pve4~bpo10 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.22-pve1~bpo10+1 libproxmox-acme-perl: 1.1.0 libproxmox-backup-qemu0: 1.1.0-1 libpve-access-control: 6.4-3 libpve-apiclient-perl: 3.1-3 libpve-common-perl: 6.4-4 libpve-guest-common-perl: 3.1-5 libpve-http-server-perl: 3.2-3 libpve-storage-perl: 6.4-1 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 4.0.6-2 lxcfs: 4.0.6-pve1 novnc-pve: 1.1.0-1 proxmox-backup-client: 1.1.13-2 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.6-1 pve-cluster: 6.4-1 pve-container: 3.3-6 pve-docs: 6.4-2 pve-edk2-firmware: 2.20200531-1 pve-firewall: 4.1-4 pve-firmware: 3.3-2 pve-ha-manager: 3.1-1 pve-i18n: 2.3-1 pve-qemu-kvm: 5.2.0-6 pve-xtermjs: 4.7.0-3 qemu-server: 6.4-2 smartmontools: 7.2-pve2 spiceterm: 3.1-1 vncterm: 1.6-2 zfsutils-linux: 2.0.6-pve1~bpo10+1 I need to disable complete? the? IPMI Watchdog or? Dell IDrac (module "ipmi_watchdog") For Dell IDrac, please desactivate the Automated System Recovery Agent in IDrac configuration. https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x#IPMI_Watchdog_.28module_.22ipmi_watchdog.22.29 Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From lists at merit.unu.edu Wed Jan 19 15:29:42 2022 From: lists at merit.unu.edu (mj) Date: Wed, 19 Jan 2022 15:29:42 +0100 Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> Message-ID: <88e93d6d-79d6-2dad-1813-789231552888@merit.unu.edu> Hi Daniel, Thanks very much for your encouraging words. Good to know that this is a viable path, also for windows VMs. MJ Op 18-01-2022 om 14:39 schreef Daniel Plominski: > Hello MJ, > > we have a lot of Windows VMs running in our Proxmox Cluster. > > The performance is very good, for most legacy / normal "2D" applications > the software (RDP) rendering is sufficient. > > For special applications (CAD software) we have additional graphics > cards passed through via "GPU passthrough" (Intel & AMD servers). 
> > This all works very well. > > Mit freundlichen Gr??en > > DANIEL PLOMINSKI > Leitung IT | Head of IT > > Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de > PGP Key: https://pgp.ass.de/dpl at ass.de.asc > PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 > > Company Logo > > ??????????? ASS-Einrichtungssysteme GmbH > ??????????? ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim > > ??????????? Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph > M?ller > ??????????? Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 > Bottom_Line > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Jan 19 16:08:22 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 19 Jan 2022 16:08:22 +0100 Subject: [PVE-User] Unexpected reboot on of 6 node In-Reply-To: <58d43866-6451-c266-7985-07ff89c0e05a@t8.ru> References: <58d43866-6451-c266-7985-07ff89c0e05a@t8.ru> Message-ID: Hi Sergey, I don't understand very well the issue. Can you post last 100 lines of syslog before reboot? El 19/1/22 a las 12:22, ?????? ??????? escribi?: > Hi, > > Like in this old thread > https://forum.proxmox.com/threads/unexpected-reboots-help-need.34310/ > I have similar problem. > > In cluster I have 7 node. > > root at pve-1: pveversion -v > > proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve) > pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) > pve-kernel-helper: 6.4-8 > pve-kernel-5.4: 6.4-7 > pve-kernel-5.4.143-1-pve: 5.4.143-1 > pve-kernel-5.4.106-1-pve: 5.4.106-1 > ceph: 15.2.15-pve1~bpo10 > ceph-fuse: 15.2.15-pve1~bpo10 > corosync: 3.1.2-pve1 > criu: 3.11-3 > glusterfs-client: 5.5-3 > ifupdown: residual config > ifupdown2: 3.0.0-1+pve4~bpo10 > ksm-control-daemon: 1.3-1 > libjs-extjs: 6.0.1-10 > libknet1: 1.22-pve1~bpo10+1 > libproxmox-acme-perl: 1.1.0 > libproxmox-backup-qemu0: 1.1.0-1 > libpve-access-control: 6.4-3 > libpve-apiclient-perl: 3.1-3 > libpve-common-perl: 6.4-4 > libpve-guest-common-perl: 3.1-5 > libpve-http-server-perl: 3.2-3 > libpve-storage-perl: 6.4-1 > libqb0: 1.0.5-1 > libspice-server1: 0.14.2-4~pve6+1 > lvm2: 2.03.02-pve4 > lxc-pve: 4.0.6-2 > lxcfs: 4.0.6-pve1 > novnc-pve: 1.1.0-1 > proxmox-backup-client: 1.1.13-2 > proxmox-mini-journalreader: 1.1-1 > proxmox-widget-toolkit: 2.6-1 > pve-cluster: 6.4-1 > pve-container: 3.3-6 > pve-docs: 6.4-2 > pve-edk2-firmware: 2.20200531-1 > pve-firewall: 4.1-4 > pve-firmware: 3.3-2 > pve-ha-manager: 3.1-1 > pve-i18n: 2.3-1 > pve-qemu-kvm: 5.2.0-6 > pve-xtermjs: 4.7.0-3 > qemu-server: 6.4-2 > smartmontools: 7.2-pve2 > spiceterm: 3.1-1 > vncterm: 1.6-2 > zfsutils-linux: 2.0.6-pve1~bpo10+1 > > I need to disable complete? the? IPMI Watchdog or? Dell IDrac (module > "ipmi_watchdog") For Dell IDrac, please desactivate the Automated > System Recovery Agent in IDrac configuration. > > https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x#IPMI_Watchdog_.28module_.22ipmi_watchdog.22.29 > > > Sergey TS > The best Regard > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 | https://www.binovo.es Astigarragako Bidea, 2 - 2? 
izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From danielb at numberall.com Wed Jan 19 20:12:44 2022 From: danielb at numberall.com (Daniel Bayerdorffer) Date: Wed, 19 Jan 2022 14:12:44 -0500 (EST) Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> Message-ID: <1103806153.136951.1642619564206.JavaMail.zimbra@numberall.com> Hello Daniel, I've been trying to do a similar setup. The problem I run into, is that the RDP session wants to force the OpenGL driver into software mode instead of using the GPU. Did you have to change any settings to force it to use the GPU for RDP? Thanks, Daniel -- Daniel Bayerdorffer, VP danielb at numberall.com Numberall Stamp & Tool Co., Inc. www.numberall.com Reuleaux Models www.reuleauxmodels.com CypherSafe www.cyphersafe.io PO BOX 187, Sangerville, ME 04479 USA TEL: 207-876-3541 FAX: 207-876-3566 ----- Original Message ----- From: "Daniel Plominski" To: "Proxmox VE user list" Sent: Tuesday, January 18, 2022 8:39:02 AM Subject: Re: [PVE-User] windows remote desktop services as VM on proxmox Hello MJ, we have a lot of Windows VMs running in our Proxmox Cluster. The performance is very good, for most legacy / normal "2D" applications the software (RDP) rendering is sufficient. For special applications (CAD software) we have additional graphics cards passed through via "GPU passthrough" (Intel & AMD servers). This all works very well. Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From dpl at ass.de Thu Jan 20 07:26:36 2022 From: dpl at ass.de (Daniel Plominski) Date: Thu, 20 Jan 2022 06:26:36 +0000 Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: <1103806153.136951.1642619564206.JavaMail.zimbra@numberall.com> References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> <1103806153.136951.1642619564206.JavaMail.zimbra@numberall.com> Message-ID: Hello Daniel Bayerdorffer, Long story briefly explained on the example of an AMD server with NVIDIA graphics card (on Proxmox 6.4 / 7): 1. activate IOMMU, deactivate framebuffer root at assg25:~# cat /etc/kernel/cmdline root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on iommu=pt video=efifb:off root at assg25:~# root at assg25:~# update-initramfs -u -k all 2. deactivate (nativ) Kerneldrivers root at assg25:~# cat /etc/modprobe.d/blacklist.conf blacklist radeon blacklist nouveau blacklist nvidia blacklist nvidiafb blacklist snd_hda_intel root at assg25:~# 3. load vfio drivers root at assg25:~# cat /etc/modules # /etc/modules: kernel modules to load at boot time. # # This file contains the names of kernel modules that should be loaded # at boot time, one per line. Lines beginning with "#" are ignored. vfio vfio_iommu_type1 vfio_pci vfio_virqfd # EOF root at assg25:~# 4. 
search for the appropriate graphics card entry root at assg25:~# root at assg25:~# lspci -v > /tmp/GPU_INFO root at assg25:~# root at assg25:~# grep -A 30 "Quadro P1000" /tmp/GPU_INFO root at assg25:~# lspci -n -s 27:00 27:00.0 0300: 10de:1cb1 (rev a1) 27:00.1 0403: 10de:0fb9 (rev a1) root at assg25:~# 5. configure vfio.conf root at assg25:~# cat /etc/modprobe.d/vfio.conf options vfio-pci ids=10de:1cb1,10de:0fb9 disable_vga=1 root at assg25:~# 6. host reboot sync update-initramfs -u -k all proxmox-boot-tool refresh sync; reboot 7. create a windows vm in ovmf (uefi) mode, machine type: pc-q35-5.2, cpu with hidden and hv-vendor-id flag and gpu (hostpci) passthrough root at assg25:/etc/pve/qemu-server# cat 216.conf # #term41gpu # #GPU - PCIe 27%3A00 # agent: 1,type=virtio balloon: 0 bios: ovmf boot: order=virtio0;net0 cores: 12 cpu: host,hidden=1,hv-vendor-id=proxmox efidisk0: local-zfs:vm-216-disk-1,size=1M hostpci0: 27:00,pcie=1 machine: pc-q35-5.2 memory: 73728 name: term41gpu net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr107,firewall=1 numa: 1 ostype: win10 scsihw: virtio-scsi-single smbios1: uuid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX sockets: 2 startup: order=116 virtio0: local-zfs:vm-216-disk-0,iothread=1,size=256G vmgenid: XXXXXXX-XXXX-XXXX-XXXXXXXXXXXX root at assg25:/etc/pve/qemu-server# 8. install the windows server drivers from nvidia https://www.nvidia.de/Download/driverResults.aspx/176988/de 9. activate the necessary RemoteFX settings using the active-directory group policies (or the local ones) https://www.leadergpu.com/articles/483-how-to-enable-gpu-rendering-for-microsoft-remote-desktop-on-leadergpu-servers ASS - Der Bildungseinrichter GmbH Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From dpl at ass.de Thu Jan 20 08:10:48 2022 From: dpl at ass.de (Daniel Plominski) Date: Thu, 20 Jan 2022 07:10:48 +0000 Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> <1103806153.136951.1642619564206.JavaMail.zimbra@numberall.com> Message-ID: ... possibly one more note on this ... This solution works well for a handful of users on a Windows Server 2019 Server VM as a Remote Desktop Session Host (RDSH) bound to a graphics card. If a higher density of VMs per graphics card is required, there is no way around an NVIDIA GRID solution using VGPU licensing on VMware. This is all a matter of investment. There are projects where even consumer graphics cards with patched firmware allow similar virtual splitting into VGPUs. However, this is not an option in the business environment. An alternative to NVIDIA would be the hardware splitting of AMD Pro graphics card via SR-IOV. 
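As a quick sanity check for the passthrough recipe above: before binding the card to vfio-pci it can help to confirm that the IOMMU is really active and to see which devices share an IOMMU group, since devices in one group generally have to be passed through together. A minimal sketch, run on the PVE host (the 27:00 address is only the example used above; IDs and output will differ per machine):

# was the IOMMU / AMD-Vi enabled at boot?
dmesg | grep -i -e DMAR -e IOMMU

# list the devices in each IOMMU group
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}" | sed 's/^/  /'
  done
done

# after the reboot, the GPU should be claimed by vfio-pci
lspci -nnk -s 27:00 | grep -i 'driver in use'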
Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From tsabolov at t8.ru Thu Jan 20 10:51:42 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 20 Jan 2022 12:51:42 +0300 Subject: [PVE-User] Unexpected reboot one of 6 node in Cluster In-Reply-To: <58d43866-6451-c266-7985-07ff89c0e05a@t8.ru> References: <58d43866-6451-c266-7985-07ff89c0e05a@t8.ru> Message-ID: Hi? to all. Is good configure if I enable the /# select watchdog module (default is softdog) #WATCHDOG_MODULE=ipmi_watchdog/ /For now I have / /lsmod | grep dog softdog??????????????? 16384? 2 / / / 19.01.2022 14:22, ?????? ??????? ?????: > Hi, > > Like in this old thread > https://forum.proxmox.com/threads/unexpected-reboots-help-need.34310/ > I have similar problem. > > In cluster I have 7 node. > > root at pve-1: pveversion -v > > proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve) > pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) > pve-kernel-helper: 6.4-8 > pve-kernel-5.4: 6.4-7 > pve-kernel-5.4.143-1-pve: 5.4.143-1 > pve-kernel-5.4.106-1-pve: 5.4.106-1 > ceph: 15.2.15-pve1~bpo10 > ceph-fuse: 15.2.15-pve1~bpo10 > corosync: 3.1.2-pve1 > criu: 3.11-3 > glusterfs-client: 5.5-3 > ifupdown: residual config > ifupdown2: 3.0.0-1+pve4~bpo10 > ksm-control-daemon: 1.3-1 > libjs-extjs: 6.0.1-10 > libknet1: 1.22-pve1~bpo10+1 > libproxmox-acme-perl: 1.1.0 > libproxmox-backup-qemu0: 1.1.0-1 > libpve-access-control: 6.4-3 > libpve-apiclient-perl: 3.1-3 > libpve-common-perl: 6.4-4 > libpve-guest-common-perl: 3.1-5 > libpve-http-server-perl: 3.2-3 > libpve-storage-perl: 6.4-1 > libqb0: 1.0.5-1 > libspice-server1: 0.14.2-4~pve6+1 > lvm2: 2.03.02-pve4 > lxc-pve: 4.0.6-2 > lxcfs: 4.0.6-pve1 > novnc-pve: 1.1.0-1 > proxmox-backup-client: 1.1.13-2 > proxmox-mini-journalreader: 1.1-1 > proxmox-widget-toolkit: 2.6-1 > pve-cluster: 6.4-1 > pve-container: 3.3-6 > pve-docs: 6.4-2 > pve-edk2-firmware: 2.20200531-1 > pve-firewall: 4.1-4 > pve-firmware: 3.3-2 > pve-ha-manager: 3.1-1 > pve-i18n: 2.3-1 > pve-qemu-kvm: 5.2.0-6 > pve-xtermjs: 4.7.0-3 > qemu-server: 6.4-2 > smartmontools: 7.2-pve2 > spiceterm: 3.1-1 > vncterm: 1.6-2 > zfsutils-linux: 2.0.6-pve1~bpo10+1 > > I need to disable complete? the? IPMI Watchdog or? Dell IDrac (module > "ipmi_watchdog") For Dell IDrac, please desactivate the Automated > System Recovery Agent in IDrac configuration. 
> > https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x#IPMI_Watchdog_.28module_.22ipmi_watchdog.22.29 > > > Sergey TS > The best Regard > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gaio at lilliput.linux.it Thu Jan 20 11:47:28 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Thu, 20 Jan 2022 11:47:28 +0100 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: ; from SmartGate on Thu, Jan 20, 2022 at 12:06:02PM +0100 References: Message-ID: Mandi! Daniel Plominski In chel di` si favelave... > enable ZFS set compression=lz4 on the zvolume Seems just enabled: root at ctpve1:~# zpool get all rpool | grep lz4 rpool feature at lz4_compress active local root at ctpve1:~# zpool get all rpool-data | grep lz4 rpool-data feature at lz4_compress active local > and perform a memory limit dd run (inside the vm). > dd if=/dev/zero of=/mountpoint_sda7/CLEANUP bs=99M count=xxx > (this will release the storage space) OK. But i've just deleted the partitions. I have to create a new partition, format them, create a 'dd-zero' file in them and then the space will be released? Really?! I'm asking because i supposed that was the 'trim/thin' feature of ZFS to permit to shrink a disk (eg, 'don't save the unallocated space'), not the compressione feature (eg, 'don't save a bunch of consecutive zero, compress it'). Speaking more clearly, i hope: i've perfectly clear that 'zeroing' a portion of a disk permit the compression feature of zfs to compress it, but i supposed that was the management of allocated spaces that make their business here... i'm only a bit puzzled. I hope someone can clarify, thanks. PS: this server have a 'Proxmox VE Community Subscription 1 CPU/year' currently active on, but i prefere if possible to use mailing list for this support question. FYI. -- Ognuno vada dove vuole andare, ognuno invecchi come gli pare ma non raccontate a me che cos'e` la LIBERTA`. (F. Guccini) From gaio at lilliput.linux.it Thu Jan 20 11:55:29 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Thu, 20 Jan 2022 11:55:29 +0100 Subject: [PVE-User] Trim and ZFS pools on SSD... Message-ID: I've asked this some month ago, eg i've asked if there's in PVE some 'framework' to enable trim for SSD ZFS pools. At that time the reply was no. Now seems arrived, seems a standard debian feature for 'zfsutils-linux': at the crontab /etc/cron.d/zfsutils-linux : # TRIM the first Sunday of every month. 24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi and /usr/lib/zfs-linux/trim script seems to automatically trim every pool that have not 'autotrim' enabled and have defined the custom property: org.debian:periodic-trim but i've not found info on debian and PVE documentation. Someone can confirm that? I've simply to do something like: zpool set org.debian:periodic-trim=yes rpool Thanks. -- Ogni giorno un sistemista Nt si sveglia... e sa che dovr? lavorare. Ogni giorno un sistemista Linux si sveglia... alle 13.00, per il pranzo. 
Non importa che tu sia sistemista Nt o Linux... tanto ti pagano uguale!!! From leesteken at protonmail.ch Thu Jan 20 12:38:57 2022 From: leesteken at protonmail.ch (Arjen) Date: Thu, 20 Jan 2022 11:38:57 +0000 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: References: Message-ID: When discard is enabled on the virtual disk and also the OS inside the VM executes trim commands for deleted data, then it should inform ZFS that the blocks are free. Marking blocks as usused will not work without the help of the OS inside the VM. Is autotrim enabled for the ZFS pool, or do you run zpool trim (regularly)? Was trimming enabled for the OS inside the VM? Otherwise, you need to create a partition and trim it inside the VM. I hope this helps a bit. Best regards, Arjen ??????? Original Message ??????? On Thursday, January 20th, 2022 at 11:47, Marco Gaiarin wrote: > Mandi! Daniel Plominski > > In chel di` si favelave... > > > enable ZFS set compression=lz4 on the zvolume > > Seems just enabled: > > root at ctpve1:~# zpool get all rpool | grep lz4 > > rpool feature at lz4_compress active local > > root at ctpve1:~# zpool get all rpool-data | grep lz4 > > rpool-data feature at lz4_compress active local > > > and perform a memory limit dd run (inside the vm). > > > > dd if=/dev/zero of=/mountpoint_sda7/CLEANUP bs=99M count=xxx > > > > (this will release the storage space) > > OK. But i've just deleted the partitions. I have to create a new partition, > > format them, create a 'dd-zero' file in them and then the space will be > > released? > > Really?! > > I'm asking because i supposed that was the 'trim/thin' feature of ZFS to > > permit to shrink a disk (eg, 'don't save the unallocated space'), not the > > compressione feature (eg, 'don't save a bunch of consecutive zero, compress > > it'). > > Speaking more clearly, i hope: i've perfectly clear that 'zeroing' a portion > > of a disk permit the compression feature of zfs to compress it, but i > > supposed that was the management of allocated spaces that make their > > business here... i'm only a bit puzzled. > > I hope someone can clarify, thanks. > > PS: this server have a 'Proxmox VE Community Subscription 1 CPU/year' > > currently active on, but i prefere if possible to use mailing list for this > > support question. FYI. > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > Ognuno vada dove vuole andare, ognuno invecchi come gli pare > > ma non raccontate a me che cos'e `la LIBERTA`. (F. 
Guccini) > > pve-user mailing list > > pve-user at lists.proxmox.com > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From ralf.storm at konzept-is.de Thu Jan 20 12:52:07 2022 From: ralf.storm at konzept-is.de (Ralf Storm) Date: Thu, 20 Jan 2022 12:52:07 +0100 Subject: [PVE-User] Trim and ZFS pools on SSD... In-Reply-To: References: Message-ID: <2b2b8b20-480a-38d4-2a5c-cc58afac7fa5@konzept-is.de> Hi, make sure your Vdisks are connected via scsi and the scsi controller is virtioscsi, the discs need to have the checkmark "discard". You can check if it works under ubuntu and debian wit hte command "fstrim -av" - if there is no output it doesn`t work, otherwise, after a while you will get the trim results displayed. hope that helps.... Am 20/01/2022 um 11:55 schrieb Marco Gaiarin: > I've asked this some month ago, eg i've asked if there's in PVE some > 'framework' to enable trim for SSD ZFS pools. At that time the reply was no. > > > Now seems arrived, seems a standard debian feature for 'zfsutils-linux': at > the crontab /etc/cron.d/zfsutils-linux : > > # TRIM the first Sunday of every month. > 24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi > > and /usr/lib/zfs-linux/trim script seems to automatically trim every pool > that have not 'autotrim' enabled and have defined the custom property: > > org.debian:periodic-trim > > but i've not found info on debian and PVE documentation. > > > Someone can confirm that? I've simply to do something like: > > zpool set org.debian:periodic-trim=yes rpool > > > Thanks. > -- Ralf Storm Systemadministrator Konzept Informationssysteme GmbH Am Weiher 13 ? 88709 Meersburg Fon: +49 7532 4466-299 Fax: +49 7532 4466-66 ralf.storm at konzept-is.de www.konzept-is.de Amtsgericht Freiburg 581491 ? Gesch?ftsf?hrer: Dr. Peer Griebel, Frank H??ler From dpl at ass.de Thu Jan 20 13:23:06 2022 From: dpl at ass.de (Daniel Plominski) Date: Thu, 20 Jan 2022 12:23:06 +0000 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: References: Message-ID: Hello, we have had very bad experience with trim, due to a virtio driver error, a special release, the trim resulted in a corrupt Windows NTFS file system. The bug is fixed in the latest stable VirtIO drivers, but we only use the legacy method using targeted "dd" (zero) cleanup on Linux / BSD VMs and on Windows using "fsutil". >From the perspective of the HOST system, a zvolume formatted with filesystem and stored data is simply a "data container" with unstructured data (depending on the structure, however, this can be compressed well). If now within the VM an area with zeros is written, this zvolume area is stimulated to overwrite the contained data, the process is compressed and the occupied blocks are released. The block free up should also work with the ZFS algorithm "zle". How often the cells of an SSD / NVME memory are overwritten depends on the internal intelligence of the firmware / controller of the respective memory. TRIM "Discard" (if supported by the hardware, the hypervisor, the hypervisor driver and the guest OS) can be used to better control this remapping. 
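To make the zero-fill approach described above concrete for a Linux guest, a rough sketch (mountpoint and dataset names are only examples, and the fill file must be removed right away):

# on the PVE host: note the current usage of the thin zvol
zfs list -o name,volsize,used,compressratio rpool/data/vm-100-disk-0

# inside the guest: overwrite the free space with zeros, then delete the file
# (dd ending with "No space left on device" is expected here)
dd if=/dev/zero of=/srv/CLEANUP bs=1M status=progress
sync
rm /srv/CLEANUP
sync

# on the PVE host again: 'used' should have dropped, because all-zero
# blocks are compressed away when compression (lz4, zle, ...) is enabled
zfs list -o name,volsize,used,compressratio rpool/data/vm-100-disk-0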
ASS - Der Bildungseinrichter GmbH Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From dpl at ass.de Thu Jan 20 14:16:59 2022 From: dpl at ass.de (Daniel Plominski) Date: Thu, 20 Jan 2022 13:16:59 +0000 Subject: [PVE-User] Proxmox and ZFS on large JBOD ? In-Reply-To: References: <20220110141350.716d9727@schleppmd.hlrs.de> Message-ID: Hello Martin, what is the exact use case? If it is "only" about providing a robust ZFS based storage server for (NFS, SMB, ISCSI), then I can only recommend TrueNAS CORE. One of our servers with 128 GB RAM handles about 3 million snapshots without problems (with a complex replication structure). After about 10 years of ZFS experience, I find the ZFS implementation in the FreeBSD kernel to be one of the most robust besides Solaris. If you also want to virtualize on this machine, Proxmox itself or TrueNAS Scale would of course still be a possibility. For larger data server setups, however, I think the interaction between the FreeBSD kernel and ZFS is better than ZFSonLinux. Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line From elacunza at binovo.es Fri Jan 21 10:27:08 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 21 Jan 2022 10:27:08 +0100 Subject: 6.4 backup issue Message-ID: <4b536288-6645-6582-a29b-aa75b241c442@binovo.es> Hi all, 3 days ago we updated a PVE 6.0 host to 6.4 . It has been working without issue for more than a year since last update until then. After the update, one of the VMs has issues with backups: INFO: Starting Backup of VM 105 (qemu) INFO: Backup started at 2022-01-21 00:05:35 INFO: status = running INFO: VM Name: XXX INFO: include disk 'virtio0' 'local:105/vm-105-disk-1.raw' 55G INFO: include disk 'virtio1' 'local:105/vm-105-disk-2.raw' 50G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating vzdump archive '/mnt/pve/nas2/dump/vzdump-qemu-105-2022_01_21-00_05_35.vma.lzo' INFO: started backup task '26485871-deae-463f-839f-437fcafda023' INFO: resuming VM again ERROR: VM 105 qmp command 'cont' failed - got timeout INFO: aborting backup job INFO: resuming VM again ERROR: Backup of VM 105 failed - VM 105 qmp command 'cont' failed - got timeout INFO: Failed at 2022-01-21 00:06:00 Other VMs' backups work without issues. Scheduled backup for this VM has failed 3 times. Launching backup by hand fails sometimes, but after one or two retries, it works. Any idea? :-) Storage is default "local", filesystem-based. Backup is being done for al VMs to local NFS server. 
# pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve) pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) pve-kernel-5.4: 6.4-11 pve-kernel-helper: 6.4-11 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.157-1-pve: 5.4.157-1 pve-kernel-4.15: 5.4-8 pve-kernel-5.0.21-5-pve: 5.0.21-10 pve-kernel-5.0.21-1-pve: 5.0.21-2 pve-kernel-4.15.18-20-pve: 4.15.18-46 pve-kernel-4.4.134-1-pve: 4.4.134-112 ceph-fuse: 12.2.11+dfsg1-2.1+b1 corosync: 3.1.5-pve2~bpo10+1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.22-pve2~bpo10+1 libproxmox-acme-perl: 1.1.0 libproxmox-backup-qemu0: 1.1.0-1 libpve-access-control: 6.4-3 libpve-apiclient-perl: 3.1-3 libpve-common-perl: 6.4-4 libpve-guest-common-perl: 3.1-5 libpve-http-server-perl: 3.2-3 libpve-storage-perl: 6.4-1 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 4.0.6-2 lxcfs: 4.0.6-pve1 novnc-pve: 1.1.0-1 proxmox-backup-client: 1.1.13-2 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.6-1 pve-cluster: 6.4-1 pve-container: 3.3-6 pve-docs: 6.4-2 pve-edk2-firmware: 2.20200531-1 pve-firewall: 4.1-4 pve-firmware: 3.3-2 pve-ha-manager: 3.1-1 pve-i18n: 2.3-1 pve-qemu-kvm: 5.2.0-6 pve-xtermjs: 4.7.0-3 qemu-server: 6.4-2 smartmontools: 7.2-pve2 spiceterm: 3.1-1 vncterm: 1.6-2 zfsutils-linux: 2.0.6-pve1~bpo10+1 Thanks EnekoLacunza CTO | Zuzendari teknikoa Binovo IT Human Project 943 569 206 elacunza at binovo.es binovo.es Astigarragako Bidea, 2 - 2 izda. Oficina 10-11, 20180 Oiartzun youtube linkedin From gaio at lilliput.linux.it Fri Jan 21 11:06:18 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Fri, 21 Jan 2022 11:06:18 +0100 Subject: [PVE-User] Trim and ZFS pools on SSD... In-Reply-To: <2b2b8b20-480a-38d4-2a5c-cc58afac7fa5@konzept-is.de>; from SmartGate on Fri, Jan 21, 2022 at 12:06:01PM +0100 References: <2b2b8b20-480a-38d4-2a5c-cc58afac7fa5@konzept-is.de> Message-ID: Mandi! Ralf Storm In chel di` si favelave... > make sure your Vdisks are connected via scsi and the scsi controller is > virtioscsi, the discs need to have the checkmark "discard". > You can check if it works under ubuntu and debian wit hte command > "fstrim -av" - if there is no output it doesn`t work, otherwise, after a > while you will get the trim results displayed. No, sorry, i've posted two 'trim/discard' topic that (apart this) are totally unrelated. I'm speaking now of host OS, eg PVE: if i have an SSD-based ZFS pool, how can i trim it? Seems that recent 'zfsutils-linux' package add some 'framework' to do trim, but seems (totally) undocumented. So i'm seeking feedback. I hope i was clear now... -- Dai diamanti non nasce niente dal letame nascono i fior (F. De Andre`) From gaio at lilliput.linux.it Fri Jan 21 11:03:13 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Fri, 21 Jan 2022 11:03:13 +0100 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: ; from SmartGate on Fri, Jan 21, 2022 at 12:06:01PM +0100 References: Message-ID: Mandi! Arjen via pve-user In chel di` si favelave... > When discard is enabled on the virtual disk and also the OS inside the VM executes trim commands for deleted data, then it should inform ZFS that the blocks are free. Marking blocks as usused will not work without the help of the OS inside the VM. OK. 
So, speking practically: a) i've do the wrong things, deleting the partitions, because the guest OS had not the opportunity to reclaim free space; because trim/discard need the cooperation of all the 'chain', probably it was needed to delete all the file in the partition, the trim them, then delete them. b) if the guest OS does not support trim, or support wrongly/buggy, the same result will be achieved (creating and) zeroing the partition, if conpression are enabled. This does not involve trim/discard, but only compression, acheving the same result indeed. > Was trimming enabled for the OS inside the VM? Otherwise, you need to create a partition and trim it inside the VM. Guest os support fstrim, so i'll try a). Thanks. -- Con Windows sei in vacanza: ti diverti senza pensare a ci? che fai, ma dopo un po' finisce. In Linux entri nella vita reale: Devi tirar fuori le palle! (Alain Modolo) From leesteken at protonmail.ch Fri Jan 21 12:16:36 2022 From: leesteken at protonmail.ch (Arjen) Date: Fri, 21 Jan 2022 11:16:36 +0000 Subject: [PVE-User] Trim and ZFS pools on SSD... In-Reply-To: References: <2b2b8b20-480a-38d4-2a5c-cc58afac7fa5@konzept-is.de> Message-ID: On Friday, January 21st, 2022 at 11:06, Marco Gaiarin wrote: > Mandi! Ralf Storm > > In chel di` si favelave... > > > make sure your Vdisks are connected via scsi and the scsi controller is > > virtioscsi, the discs need to have the checkmark "discard". > > You can check if it works under ubuntu and debian wit hte command > > "fstrim -av" - if there is no output it doesn`t work, otherwise, after a > > while you will get the trim results displayed. > > No, sorry, i've posted two 'trim/discard' topic that (apart this) are > totally unrelated. > I'm speaking now of host OS, eg PVE: if i have an SSD-based ZFS pool, how > can i trim it? > > Seems that recent 'zfsutils-linux' package add some 'framework' to do trim, > but seems (totally) undocumented. So i'm seeking feedback. > > I hope i was clear now... You can enabled the autotrim feature of the ZFS pool or run zpool trim . To get more information use this command on PVE: man zpool trim Is that what you were asking for? From tsabolov at t8.ru Fri Jan 21 12:10:27 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Fri, 21 Jan 2022 14:10:27 +0300 Subject: [PVE-User] openvswitch + bond0 + 2 Fiber interfaces. Message-ID: Hello, I have PVE cluster and I thinking to install on? the pve-7 openvswitch for can move and add VM from other networks and Proxmox Cluster With base Linux bridge all work well without problem with 2 interface 10GB ens1f0np0 ens1f12np0 I? install openvswitch? with manual https://pve.proxmox.com/wiki/Open_vSwitch I want use Fiber? 10GB interfaces ens1f0np0 ens1f12np0? with Bond I think. I try some settings but is not working. My setup in interfaces: auto lo iface lo inet loopback auto ens1f12np0 iface ens1f12np0 inet manual #Fiber iface idrac inet manual iface eno2 inet manual iface eno3 inet manual iface eno4 inet manual auto ens1f0np0 iface ens1f0np0 inet manual iface eno1 inet manual auto inband iface inband inet static ??? address 10.10.29.10/24 ??? gateway 10.10.29.250 ??? ovs_type OVSIntPort ??? ovs_bridge vmbr0 #Proxmox Web Access auto vlan10 iface vlan10 inet manual ??? ovs_type OVSIntPort ??? ovs_bridge vmbr0 ??? ovs_options tag=10 #Network 10 auto bond0 iface bond0 inet manual ??? ovs_bonds ens1f0np0 ens1f12np0 ??? ovs_type OVSBond ??? ovs_bridge vmbr0 ??? ovs_mtu 9000 ??? 
ovs_options bond_mode=active-backup auto vmbr0 iface vmbr0 inet manual ??? ovs_type OVSBridge ??? ovs_ports bond0 inband vlan10 ??? ovs_mtu 9000 #inband Can some one help me if I set all correctly or not? If someone have setup openvswitch with Bond interfaces 10GB share with me configuration. Thank at lot. Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From d.alexandris at gmail.com Fri Jan 21 13:03:17 2022 From: d.alexandris at gmail.com (Dimitri Alexandris) Date: Fri, 21 Jan 2022 14:03:17 +0200 Subject: [PVE-User] openvswitch + bond0 + 2 Fiber interfaces. In-Reply-To: References: Message-ID: I have Openvswitch bonds working fine for years now, but in older versions of Proxmox (6.4-4 and 5.3-5): -------------- auto eno2 iface eno2 inet manual auto eno1 iface eno1 inet manual allow-vmbr0 ath iface ath inet static address 10.NN.NN.38/26 gateway 10.NN.NN.1 ovs_type OVSIntPort ovs_bridge vmbr0 ovs_options tag=100 . . allow-vmbr0 bond0 iface bond0 inet manual ovs_bonds eno1 eno2 ovs_type OVSBond ovs_bridge vmbr0 ovs_options bond_mode=balance-slb lacp=active allow-ovs vmbr0 iface vmbr0 inet manual ovs_type OVSBridge ovs_ports bond0 ath lan dmz_vod ampr -------- I think now, "allow-vmbr0" and "allow-ovs" are replaced with "auto". This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches. Several OVSIntPort VLANS are attached to it. I also had 10G bonds (Intel, Supermicro inter-server links), with the same result. I see the only difference with your setup is the bond_mode. Switch setup is also very important to match this. On Fri, Jan 21, 2022 at 1:23 PM ?????? ??????? wrote: > Hello, > > I have PVE cluster and I thinking to install on the pve-7 openvswitch > for can move and add VM from other networks and Proxmox Cluster > > With base Linux bridge all work well without problem with 2 interface > 10GB ens1f0np0 ens1f12np0 > > I install openvswitch with manual > https://pve.proxmox.com/wiki/Open_vSwitch > > I want use Fiber 10GB interfaces ens1f0np0 ens1f12np0 with Bond I think. > > I try some settings but is not working. > > My setup in interfaces: > > auto lo > iface lo inet loopback > > auto ens1f12np0 > iface ens1f12np0 inet manual > #Fiber > > iface idrac inet manual > > iface eno2 inet manual > > iface eno3 inet manual > > iface eno4 inet manual > > auto ens1f0np0 > iface ens1f0np0 inet manual > > iface eno1 inet manual > > auto inband > iface inband inet static > address 10.10.29.10/24 > gateway 10.10.29.250 > ovs_type OVSIntPort > ovs_bridge vmbr0 > #Proxmox Web Access > > auto vlan10 > iface vlan10 inet manual > ovs_type OVSIntPort > ovs_bridge vmbr0 > ovs_options tag=10 > #Network 10 > > auto bond0 > iface bond0 inet manual > ovs_bonds ens1f0np0 ens1f12np0 > ovs_type OVSBond > ovs_bridge vmbr0 > ovs_mtu 9000 > ovs_options bond_mode=active-backup > > auto vmbr0 > iface vmbr0 inet manual > ovs_type OVSBridge > ovs_ports bond0 inband vlan10 > ovs_mtu 9000 > #inband > > > Can some one help me if I set all correctly or not? > > If someone have setup openvswitch with Bond interfaces 10GB share with > me configuration. > > Thank at lot. 
> > > Sergey TS > The best Regard > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From tsabolov at t8.ru Fri Jan 21 13:28:27 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Fri, 21 Jan 2022 15:28:27 +0300 Subject: [PVE-User] openvswitch + bond0 + 2 Fiber interfaces. In-Reply-To: References: Message-ID: Dimitri, hello Thank you with you share My Proxmox? is proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve) I try change allow-vmbr0 with auto. I found the link https://metadata.ftp-master.debian.org/changelogs/main/o/openvswitch/testing_openvswitch-switch.README.Debian In section? ex 9: Bond + Bridge + VLAN + MTU? allow is used. But nothing wrong I try allow and auto, just comment one line. Dimitri, thanks again for you share. 21.01.2022 15:03, Dimitri Alexandris ?????: > I have Openvswitch bonds working fine for years now, but in older versions > of Proxmox (6.4-4 and 5.3-5): > > -------------- > auto eno2 > iface eno2 inet manual > > auto eno1 > iface eno1 inet manual > > allow-vmbr0 ath > iface ath inet static > address 10.NN.NN.38/26 > gateway 10.NN.NN.1 > ovs_type OVSIntPort > ovs_bridge vmbr0 > ovs_options tag=100 > . > . > allow-vmbr0 bond0 > iface bond0 inet manual > ovs_bonds eno1 eno2 > ovs_type OVSBond > ovs_bridge vmbr0 > ovs_options bond_mode=balance-slb lacp=active > allow-ovs vmbr0 > iface vmbr0 inet manual > ovs_type OVSBridge > ovs_ports bond0 ath lan dmz_vod ampr > -------- > > I think now, "allow-vmbr0" and "allow-ovs" are replaced with "auto". > > This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches. > Several OVSIntPort VLANS are attached to it. > I also had 10G bonds (Intel, Supermicro inter-server links), with the same > result. > > I see the only difference with your setup is the bond_mode. Switch setup > is also very important to match this. > > > > > > On Fri, Jan 21, 2022 at 1:23 PM ?????? ??????? wrote: > >> Hello, >> >> I have PVE cluster and I thinking to install on the pve-7 openvswitch >> for can move and add VM from other networks and Proxmox Cluster >> >> With base Linux bridge all work well without problem with 2 interface >> 10GB ens1f0np0 ens1f12np0 >> >> I install openvswitch with manual >> https://pve.proxmox.com/wiki/Open_vSwitch >> >> I want use Fiber 10GB interfaces ens1f0np0 ens1f12np0 with Bond I think. >> >> I try some settings but is not working. 
>> >> My setup in interfaces: >> >> auto lo >> iface lo inet loopback >> >> auto ens1f12np0 >> iface ens1f12np0 inet manual >> #Fiber >> >> iface idrac inet manual >> >> iface eno2 inet manual >> >> iface eno3 inet manual >> >> iface eno4 inet manual >> >> auto ens1f0np0 >> iface ens1f0np0 inet manual >> >> iface eno1 inet manual >> >> auto inband >> iface inband inet static >> address 10.10.29.10/24 >> gateway 10.10.29.250 >> ovs_type OVSIntPort >> ovs_bridge vmbr0 >> #Proxmox Web Access >> >> auto vlan10 >> iface vlan10 inet manual >> ovs_type OVSIntPort >> ovs_bridge vmbr0 >> ovs_options tag=10 >> #Network 10 >> >> auto bond0 >> iface bond0 inet manual >> ovs_bonds ens1f0np0 ens1f12np0 >> ovs_type OVSBond >> ovs_bridge vmbr0 >> ovs_mtu 9000 >> ovs_options bond_mode=active-backup >> >> auto vmbr0 >> iface vmbr0 inet manual >> ovs_type OVSBridge >> ovs_ports bond0 inband vlan10 >> ovs_mtu 9000 >> #inband >> >> >> Can some one help me if I set all correctly or not? >> >> If someone have setup openvswitch with Bond interfaces 10GB share with >> me configuration. >> >> Thank at lot. >> >> >> Sergey TS >> The best Regard >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From danielb at numberall.com Fri Jan 21 21:35:11 2022 From: danielb at numberall.com (Daniel Bayerdorffer) Date: Fri, 21 Jan 2022 15:35:11 -0500 (EST) Subject: [PVE-User] windows remote desktop services as VM on proxmox In-Reply-To: References: <5488c340-0a68-cb7d-4e65-e9a5fcfa7f23@merit.unu.edu> <1103806153.136951.1642619564206.JavaMail.zimbra@numberall.com> Message-ID: <1458106184.155656.1642797311299.JavaMail.zimbra@numberall.com> Hi Daniel, Thank you so much for your help. That link at the end with the Group Policy Settings is what solved my issue! Kind Regards, Daniel -- Daniel Bayerdorffer, VP danielb at numberall.com Numberall Stamp & Tool Co., Inc. www.numberall.com Reuleaux Models www.reuleauxmodels.com CypherSafe www.cyphersafe.io PO BOX 187, Sangerville, ME 04479 USA TEL: 207-876-3541 FAX: 207-876-3566 ----- Original Message ----- From: "Daniel Plominski" To: "Proxmox VE user list" Sent: Thursday, January 20, 2022 1:26:36 AM Subject: Re: [PVE-User] windows remote desktop services as VM on proxmox Hello Daniel Bayerdorffer, Long story briefly explained on the example of an AMD server with NVIDIA graphics card (on Proxmox 6.4 / 7): 1. activate IOMMU, deactivate framebuffer root at assg25:~# cat /etc/kernel/cmdline root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on iommu=pt video=efifb:off root at assg25:~# root at assg25:~# update-initramfs -u -k all 2. deactivate (nativ) Kerneldrivers root at assg25:~# cat /etc/modprobe.d/blacklist.conf blacklist radeon blacklist nouveau blacklist nvidia blacklist nvidiafb blacklist snd_hda_intel root at assg25:~# 3. load vfio drivers root at assg25:~# cat /etc/modules # /etc/modules: kernel modules to load at boot time. 
# # This file contains the names of kernel modules that should be loaded # at boot time, one per line. Lines beginning with "#" are ignored. vfio vfio_iommu_type1 vfio_pci vfio_virqfd # EOF root at assg25:~# 4. search for the appropriate graphics card entry root at assg25:~# root at assg25:~# lspci -v > /tmp/GPU_INFO root at assg25:~# root at assg25:~# grep -A 30 "Quadro P1000" /tmp/GPU_INFO root at assg25:~# lspci -n -s 27:00 27:00.0 0300: 10de:1cb1 (rev a1) 27:00.1 0403: 10de:0fb9 (rev a1) root at assg25:~# 5. configure vfio.conf root at assg25:~# cat /etc/modprobe.d/vfio.conf options vfio-pci ids=10de:1cb1,10de:0fb9 disable_vga=1 root at assg25:~# 6. host reboot sync update-initramfs -u -k all proxmox-boot-tool refresh sync; reboot 7. create a windows vm in ovmf (uefi) mode, machine type: pc-q35-5.2, cpu with hidden and hv-vendor-id flag and gpu (hostpci) passthrough root at assg25:/etc/pve/qemu-server# cat 216.conf # #term41gpu # #GPU - PCIe 27%3A00 # agent: 1,type=virtio balloon: 0 bios: ovmf boot: order=virtio0;net0 cores: 12 cpu: host,hidden=1,hv-vendor-id=proxmox efidisk0: local-zfs:vm-216-disk-1,size=1M hostpci0: 27:00,pcie=1 machine: pc-q35-5.2 memory: 73728 name: term41gpu net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr107,firewall=1 numa: 1 ostype: win10 scsihw: virtio-scsi-single smbios1: uuid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX sockets: 2 startup: order=116 virtio0: local-zfs:vm-216-disk-0,iothread=1,size=256G vmgenid: XXXXXXX-XXXX-XXXX-XXXXXXXXXXXX root at assg25:/etc/pve/qemu-server# 8. install the windows server drivers from nvidia https://www.nvidia.de/Download/driverResults.aspx/176988/de 9. activate the necessary RemoteFX settings using the active-directory group policies (or the local ones) https://www.leadergpu.com/articles/483-how-to-enable-gpu-rendering-for-microsoft-remote-desktop-on-leadergpu-servers ASS - Der Bildungseinrichter GmbH Mit freundlichen Gr??en DANIEL PLOMINSKI Leitung IT | Head of IT Telefon 09265 808-151 | Mobil 0151 58026316 | dpl at ass.de PGP Key: https://pgp.ass.de/dpl at ass.de.asc PGP Fingerprint: 74DBC06BD9F63187C4DF8934C96585A89CFC10B3 Company Logo ASS-Einrichtungssysteme GmbH ASS-Adam-Stegner-Stra?e 19 | D-96342 Stockheim Gesch?ftsf?hrer: Matthias Stegner, Michael Stegner, Ralph M?ller Amtsgericht Coburg HRB 3395 | Ust-ID: DE218715721 Bottom_Line _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From uwe.sauter.de at gmail.com Mon Jan 24 11:16:08 2022 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Mon, 24 Jan 2022 11:16:08 +0100 Subject: [PVE-User] Is there a way to securely delete images from Ceph? Message-ID: <7428448f-2aa9-5115-3832-e679108a56b6@gmail.com> Hi list, just a quick question: is there a way to securely (cryptographicly) erase images in Ceph? Overwriting / shredding the VM's block device from a live ISO is probably not enough given that Ceph might use Copy on Write. So does Ceph provide something alike? Regards, Uwe From gaio at lilliput.linux.it Tue Jan 25 11:44:29 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Tue, 25 Jan 2022 11:44:29 +0100 Subject: [PVE-User] Force reclaiming space on a vdisk... In-Reply-To: ; from SmartGate on Tue, Jan 25, 2022 at 21:06:01PM +0100 References: Message-ID: > Guest os support fstrim, so i'll try a). For the archive: a) worked. Also, formatting (i've used mke2fs) trigger trim, so i was not forced to format and thet trim the disk, only format. Thanks. 
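That matches the mke2fs defaults: recent mke2fs/mkfs.ext4 discards the whole device at format time (unless -E nodiscard is given), so a plain format already hands the freed space back to the thin volume. For later, routine reclamation the same can be done without reformatting; a small sketch for inside the guest (mountpoints are examples, and the virtual disk needs discard enabled):

# trim one mounted filesystem, verbosely
fstrim -v /home

# or trim everything that supports it
fstrim -av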
-- Utopia aveva una sorella maggiore, che si chiamava Verita` senza errore (Nomadi) From gaio at lilliput.linux.it Tue Jan 25 12:17:15 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Tue, 25 Jan 2022 12:17:15 +0100 Subject: [PVE-User] Trim and ZFS pools on SSD... In-Reply-To: ; from SmartGate on Tue, Jan 25, 2022 at 21:06:01PM +0100 References: Message-ID: Mandi! Arjen via pve-user In chel di` si favelave... > You can enabled the autotrim feature of the ZFS pool or run zpool trim . > To get more information use this command on PVE: man zpool trim > Is that what you were asking for? No, 'autotrim' is a potentially dangerous option, because 'trim' can happen on a inpredictable way, leading to server load, ... It is better to schedue trim in non-working hour, so the script. -- chi si convertiva nel novanta ne era dispensato nel novantuno (F. De Andre`) From gaio at lilliput.linux.it Tue Jan 25 12:15:16 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Tue, 25 Jan 2022 12:15:16 +0100 Subject: [PVE-User] Trim and ZFS pools on SSD... In-Reply-To: ; from SmartGate on Tue, Jan 25, 2022 at 21:06:01PM +0100 References: Message-ID: <2n25ci-49g.ln1@hermione.lilliput.linux.it> > Someone can confirm that? I've simply to do something like: > zpool set org.debian:periodic-trim=yes rpool Seems sufficient to do: zfs set org.debian:periodic-trim=enable rpool to have the pool get trimmed. I hope will be documented, sooner or later. -- Voi non ci crederete la mia ragazza sogna (R. Vecchioni) From tsabolov at t8.ru Thu Jan 27 09:53:56 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 27 Jan 2022 11:53:56 +0300 Subject: [PVE-User] openvswitch + bond0 + 2 Fiber interfaces. In-Reply-To: References: Message-ID: <723dff71-7daf-6a4d-cb0f-7c2aac5f1bcc@t8.ru> Hello, I have before 30 min this problem. I complete the interfaces edit by cli. But I have problem when I try to edit from GUI something like add comment or add new OVS Bridge or new OVS IntPort (new lan) GUI answer something like # OVS bond 'bond0' - wrong interface type on slave 'ens1f0np0' ('' != 'eth') (500) # OVS bond 'bond1' - wrong interface type on slave 'eno1' ('' != 'eth') (500) Below my Interfaces : auto lo iface lo inet loopback auto ens1f0np0 iface ens1f0np0 inet manual ?? ?ovs_type OVSPort ?? ?ovs_mtu 9000 #Fiber auto ens1f12np0 iface ens1f12np0 inet manual ?? ?ovs_type OVSPort ?? ?ovs_mtu? 9000 #Fiber auto eno1 iface eno1 inet manual ?? ?ovs_type OVSPort auto eno2 iface eno2 inet manual ?? ?ovs_type OVSPort auto vlan10 iface vlan10 inet static ?? ?address 10.10.29.10/24 ?? ?gateway 10.10.29.250 ?? ?ovs_type OVSIntPort ?? ?ovs_bridge vmbr0 ?? ?ovs_mtu 9000 ?? ?ovs_options tag=10 ?? ?ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif auto vlan29 iface vlan29 inet static ?? ?address 192.168.29.96/24 ?? ?ovs_type OVSIntPort ?? ?ovs_bridge vmbr1 ?? ?ovs_mtu 1500 ?? ?ovs_options tag=29 ?? ?ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif auto vlan250 iface vlan250 inet static ?? ?address 192.168.250.100/24 ?? ?ovs_type OVSIntPort ?? ?ovs_bridge vmbr0 ?? ?ovs_mtu 9000 ?? ?ovs_options tag=250 ?? ?ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif auto bond0 iface bond0 inet manual ?? ?ovs_type OVSBond ?? ?ovs_bridge vmbr0 ?? ?ovs_bonds ens1f0np0??? ens1f12np0 ?? ?ovs_options trunks=10,29,250 bond_mode=active-backup ?? ?ovs_mtu??? ??? 9000 #Fiber auto bond1 iface bond1 inet manual ?? 
?ovs_type OVSBond ?? ?ovs_bridge vmbr1 ?? ?ovs_bonds eno1??? eno2 ?? ?ovs_options bond_mode=active-backup ?? ?ovs_mtu??? 1500 auto vmbr0 iface vmbr0 inet manual ?? ?ovs_type OVSBridge ?? ?ovs_ports bond0 vlan10 vlan250 ?? ?ovs_mtu 9000 auto vmbr1 iface vmbr1 inet manual ?? ?ovs_type OVSBridge ?? ?ovs_ports bond1 vlan29 ?? ?ovs_mtu 1500 ---------------------------------------------------------------------------- Now I found solution : In interfaces wrong is ovs_type OVSPort if we comment it out or remove the GUI worked fine. auto ens1f0np0 iface ens1f0np0 inet manual #??? ovs_type OVSPort #??? ovs_mtu 9000 #Fiber auto ens1f12np0 iface ens1f12np0 inet manual #??? ovs_type OVSPort #??? ovs_mtu? 9000 #Fiber auto eno1 iface eno1 inet manual #??? ovs_type OVSPort auto eno2 iface eno2 inet manual #??? ovs_type OVSPort 21.01.2022 15:28, ?????? ??????? ?????: > Dimitri, hello > > Thank you with you share > > My Proxmox? is proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve) > > I try change allow-vmbr0 with auto. > > I found the link > https://metadata.ftp-master.debian.org/changelogs/main/o/openvswitch/testing_openvswitch-switch.README.Debian > > > In section? ex 9: Bond + Bridge + VLAN + MTU? allow is used. > > But nothing wrong I try allow and auto, just comment one line. > > > Dimitri, thanks again for you share. > > > > > 21.01.2022 15:03, Dimitri Alexandris ?????: >> I have Openvswitch bonds working fine for years now, but in older >> versions >> of Proxmox (6.4-4 and 5.3-5): >> >> -------------- >> auto eno2 >> iface eno2 inet manual >> >> auto eno1 >> iface eno1 inet manual >> >> allow-vmbr0 ath >> iface ath inet static >> address 10.NN.NN.38/26 >> gateway 10.NN.NN.1 >> ovs_type OVSIntPort >> ovs_bridge vmbr0 >> ovs_options tag=100 >> . >> . >> allow-vmbr0 bond0 >> iface bond0 inet manual >> ?? ovs_bonds eno1 eno2 >> ?? ovs_type OVSBond >> ?? ovs_bridge vmbr0 >> ?? ovs_options bond_mode=balance-slb lacp=active >> allow-ovs vmbr0 >> iface vmbr0 inet manual >> ovs_type OVSBridge >> ovs_ports bond0 ath lan dmz_vod ampr >> -------- >> >> I think now, "allow-vmbr0" and "allow-ovs" are replaced with "auto". >> >> This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches. >> Several OVSIntPort VLANS are attached to it. >> I also had 10G bonds (Intel, Supermicro inter-server links), with the >> same >> result. >> >> I see the only difference with your setup is the bond_mode. Switch setup >> is also very important to match this. >> >> >> >> >> >> On Fri, Jan 21, 2022 at 1:23 PM ?????? ???????? wrote: >> >>> Hello, >>> >>> I have PVE cluster and I thinking to install on? the pve-7 openvswitch >>> for can move and add VM from other networks and Proxmox Cluster >>> >>> With base Linux bridge all work well without problem with 2 interface >>> 10GB ens1f0np0 ens1f12np0 >>> >>> I? install openvswitch? with manual >>> https://pve.proxmox.com/wiki/Open_vSwitch >>> >>> I want use Fiber? 10GB interfaces ens1f0np0 ens1f12np0? with Bond I >>> think. >>> >>> I try some settings but is not working. >>> >>> My setup in interfaces: >>> >>> auto lo >>> iface lo inet loopback >>> >>> auto ens1f12np0 >>> iface ens1f12np0 inet manual >>> #Fiber >>> >>> iface idrac inet manual >>> >>> iface eno2 inet manual >>> >>> iface eno3 inet manual >>> >>> iface eno4 inet manual >>> >>> auto ens1f0np0 >>> iface ens1f0np0 inet manual >>> >>> iface eno1 inet manual >>> >>> auto inband >>> iface inband inet static >>> ????? address 10.10.29.10/24 >>> ????? gateway 10.10.29.250 >>> ????? ovs_type OVSIntPort >>> ????? 
21.01.2022 15:28, Сергей Цаболов wrote:
> Dimitri, hello
>
> Thank you for sharing.
>
> My Proxmox is proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
>
> I tried changing allow-vmbr0 to auto.
>
> I found the link
> https://metadata.ftp-master.debian.org/changelogs/main/o/openvswitch/testing_openvswitch-switch.README.Debian
>
> In section "ex 9: Bond + Bridge + VLAN + MTU" allow is used.
>
> But nothing went wrong either way; I tried both allow and auto, just
> commenting one line.
>
> Dimitri, thanks again for sharing.
>
> 21.01.2022 15:03, Dimitri Alexandris wrote:
>> I have Openvswitch bonds working fine for years now, but in older versions
>> of Proxmox (6.4-4 and 5.3-5):
>>
>> --------------
>> auto eno2
>> iface eno2 inet manual
>>
>> auto eno1
>> iface eno1 inet manual
>>
>> allow-vmbr0 ath
>> iface ath inet static
>>     address 10.NN.NN.38/26
>>     gateway 10.NN.NN.1
>>     ovs_type OVSIntPort
>>     ovs_bridge vmbr0
>>     ovs_options tag=100
>> .
>> .
>> allow-vmbr0 bond0
>> iface bond0 inet manual
>>     ovs_bonds eno1 eno2
>>     ovs_type OVSBond
>>     ovs_bridge vmbr0
>>     ovs_options bond_mode=balance-slb lacp=active
>>
>> allow-ovs vmbr0
>> iface vmbr0 inet manual
>>     ovs_type OVSBridge
>>     ovs_ports bond0 ath lan dmz_vod ampr
>> --------
>>
>> I think now, "allow-vmbr0" and "allow-ovs" are replaced with "auto".
>>
>> This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches.
>> Several OVSIntPort VLANs are attached to it.
>> I also had 10G bonds (Intel, Supermicro inter-server links), with the same
>> result.
>>
>> I see the only difference with your setup is the bond_mode. The switch
>> setup is also very important to match this.
>>
>> On Fri, Jan 21, 2022 at 1:23 PM Сергей Цаболов wrote:
>>
>>> Hello,
>>>
>>> I have a PVE cluster and I am thinking of installing openvswitch on the
>>> pve-7 node so that I can move and add VMs from other networks and other
>>> Proxmox clusters.
>>>
>>> With the basic Linux bridge everything works well, without problems, on
>>> the two 10GB interfaces ens1f0np0 and ens1f12np0.
>>>
>>> I installed openvswitch following the manual
>>> https://pve.proxmox.com/wiki/Open_vSwitch
>>>
>>> I want to use the fiber 10GB interfaces ens1f0np0 and ens1f12np0 with a
>>> bond, I think.
>>>
>>> I tried some settings but it is not working.
>>>
>>> My setup in interfaces:
>>>
>>> auto lo
>>> iface lo inet loopback
>>>
>>> auto ens1f12np0
>>> iface ens1f12np0 inet manual
>>> #Fiber
>>>
>>> iface idrac inet manual
>>>
>>> iface eno2 inet manual
>>>
>>> iface eno3 inet manual
>>>
>>> iface eno4 inet manual
>>>
>>> auto ens1f0np0
>>> iface ens1f0np0 inet manual
>>>
>>> iface eno1 inet manual
>>>
>>> auto inband
>>> iface inband inet static
>>>     address 10.10.29.10/24
>>>     gateway 10.10.29.250
>>>     ovs_type OVSIntPort
>>>     ovs_bridge vmbr0
>>> #Proxmox Web Access
>>>
>>> auto vlan10
>>> iface vlan10 inet manual
>>>     ovs_type OVSIntPort
>>>     ovs_bridge vmbr0
>>>     ovs_options tag=10
>>> #Network 10
>>>
>>> auto bond0
>>> iface bond0 inet manual
>>>     ovs_bonds ens1f0np0 ens1f12np0
>>>     ovs_type OVSBond
>>>     ovs_bridge vmbr0
>>>     ovs_mtu 9000
>>>     ovs_options bond_mode=active-backup
>>>
>>> auto vmbr0
>>> iface vmbr0 inet manual
>>>     ovs_type OVSBridge
>>>     ovs_ports bond0 inband vlan10
>>>     ovs_mtu 9000
>>> #inband
>>>
>>>
>>> Can someone tell me whether I set this all up correctly or not?
>>>
>>> If someone has an openvswitch setup with bonded 10GB interfaces, please
>>> share the configuration with me.
>>>
>>> Thanks a lot.
>>>
>>> Sergey TS
>>> The best Regard
>>
>
> Sergey TS
> The best Regard
>

Sergey TS
The best Regard

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


From wolf at wolfspyre.com  Fri Jan 28 20:08:20 2022
From: wolf at wolfspyre.com (Wolf Noble)
Date: Fri, 28 Jan 2022 13:08:20 -0600
Subject: [PVE-User] Metrics (add/exclude) on a per node-or-vm basis?
Message-ID: 

Hiya all!

I'm looking for the docs on how one might exclude certain node-scoped or
vm-scoped metrics from being emitted... say, the EFI blockdev for VMs. I
don't see anything in the docs on how to politely ask the metrics emitter
to just let certain metrics fall to the floor. Am I just blind?

Additionally, I was looking for the best way, and any known pains/nuances,
to feed additional metrics INTO Proxmox's metrics pipeline. I have been
considering just running a script to emit stuff upstream, which I can do,
but I also figured there's value in exploring feeding metrics to a daemon
locally and letting the metrics configuration fed to Proxmox control where
they're emitted.

Anyone else insane enough to want to do this? Or am I the only... uh...
crazy one here? :)

Wolf Noble
Hoof & Paw
wolf at wolfspyre.com

[= The contents of this message have been written, read, processed, erased,
sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost,
found, and most importantly delivered entirely with recycled electrons =]
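
[Editor's note: the archive contains no reply to this message. As a sketch
of the "script that emits stuff upstream" idea Wolf mentions, a custom
datapoint could be pushed to the same Graphite server a PVE external metric
server entry points at, using the Graphite plaintext protocol; the host,
port and metric name below are assumptions.]

  # bash one-liner sending one datapoint to a hypothetical Graphite endpoint
  echo "custom.pve.node01.my_metric 42 $(date +%s)" > /dev/tcp/graphite.example.com/2003
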
From tsabolov at t8.ru  Mon Jan 31 09:58:48 2022
From: tsabolov at t8.ru (Сергей Цаболов)
Date: Mon, 31 Jan 2022 11:58:48 +0300
Subject: [PVE-User] Ceph df
Message-ID: <34d1303e-1b35-6a1d-5221-a762f78792d0@t8.ru>

Hi to all.

I have a cluster with 7 PVE nodes.

After Ceph finished and reached health: HEALTH_OK, I checked the MAX AVAIL
storage with ceph df:

CLASS  SIZE     AVAIL   USED    RAW USED  %RAW USED
hdd    106 TiB  96 TiB  10 TiB  10 TiB    9.51
TOTAL  106 TiB  96 TiB  10 TiB  10 TiB    9.51

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1   14 MiB       22   28 MiB      0     42 TiB
vm.pool                 2  512  2.7 TiB  799.37k  8.3 TiB   8.95     28 TiB
cephfs_data             3   32  927 GiB  237.28k  1.8 TiB   2.11     42 TiB
cephfs_metadata         4   32   30 MiB       28   60 MiB      0     42 TiB

I understand why it is shown like that. My question is: how can I decrease
the MAX AVAIL of the default pools device_health_metrics and cephfs_metadata
and assign it to vm.pool and cephfs_data?

Thank you.

Sergey TS
The best Regard

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


From alwin at antreich.com  Mon Jan 31 13:05:32 2022
From: alwin at antreich.com (Alwin Antreich)
Date: Mon, 31 Jan 2022 12:05:32 +0000
Subject: [PVE-User] Ceph df
In-Reply-To: <34d1303e-1b35-6a1d-5221-a762f78792d0@t8.ru>
References: <34d1303e-1b35-6a1d-5221-a762f78792d0@t8.ru>
Message-ID: <5e1b7f7739e33e8fdcc97a5b097432f9@antreich.com>

Hello Sergey,

January 31, 2022 9:58 AM, "Сергей Цаболов" wrote:
>
> My question is how I can decrease MAX AVAIL in default pool
> device_health_metrics + cephfs_metadata and set it to vm.pool and
> cephfs_data

The MAX AVAIL value is calculated from the cluster-wide AVAIL and the pool's
USED, with respect to the replication size / EC profile.

Cheers,
Alwin
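
[Editor's note: a back-of-the-envelope illustration of the calculation Alwin
describes, assuming plain replicated pools; the replica sizes below are
guesses based on the numbers in the ceph df output and should be checked
with the commands shown. MAX AVAIL is a per-pool projection of the shared
raw space divided by the pool's replica count (after the full ratios and the
fullest OSD are taken into account), so it cannot simply be taken away from
one pool and handed to another.]

  ceph osd pool get vm.pool size        # e.g. "size: 3"  ->  ~84 TiB usable / 3 = 28 TiB MAX AVAIL
  ceph osd pool get cephfs_data size    # e.g. "size: 2"  ->  ~84 TiB usable / 2 = 42 TiB MAX AVAIL
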