From tsabolov at t8.ru  Tue Feb  1 14:59:59 2022
From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=)
Date: Tue, 1 Feb 2022 16:59:59 +0300
Subject: [PVE-User] Ceph df
In-Reply-To: <5e1b7f7739e33e8fdcc97a5b097432f9@antreich.com>
References: <34d1303e-1b35-6a1d-5221-a762f78792d0@t8.ru> <5e1b7f7739e33e8fdcc97a5b097432f9@antreich.com>
Message-ID: <748c01bc-3d9f-5f10-9249-dcfa7b3b8211@t8.ru>

Hello Alwin,

In this post
https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654
I read about "set the target ratio to 1 and call it a day", so in my case I set a target ratio of 1 on vm.pool:

ceph osd pool autoscale-status
POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics   22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
vm.pool                 2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
cephfs_data             0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
cephfs_metadata         27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on

What do you think? Do I need to set a target ratio on cephfs_metadata and device_health_metrics as well?

On the pool cephfs_data I set the target ratio to 0.2.

Or does the target ratio on vm.pool need to be not *1* but more?

31.01.2022 15:05, Alwin Antreich wrote:
> Hello Sergey,
>
> January 31, 2022 9:58 AM, "Sergey Tsabolov" wrote:
>> My question is how I can decrease MAX AVAIL in default pool
>> device_health_metrics + cephfs_metadata and set it to vm.pool and
>> cephfs_data
> The max_avail is calculated by the cluster-wide AVAIL and pool USED, with respect to the replication size / EC profile.
>
> Cheers,
> Alwin
>

Sergey TS
Best regards

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From alwin at antreich.com  Wed Feb  2 05:43:32 2022
From: alwin at antreich.com (Alwin Antreich)
Date: Wed, 02 Feb 2022 05:43:32 +0100
Subject: [PVE-User] Ceph df
In-Reply-To: References: Message-ID: <13685785-8D44-4E6B-A741-DF0A6BE1A397@antreich.com>

Missed to send it to the list as well. Answers inline.

On February 1, 2022 2:59:59 PM GMT+01:00, "Sergey Tsabolov" wrote:
> Hello Alwin,
>
> In this post
> https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654
> I read about "set the target ratio to 1 and call it a day", so in my case I set a target ratio of 1 on vm.pool:
>
> ceph osd pool autoscale-status
> POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> device_health_metrics   22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
> vm.pool                 2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
> cephfs_data             0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
> cephfs_metadata         27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on
>
> What do you think? Do I need to set a target ratio on cephfs_metadata and
> device_health_metrics as well?

```
TARGET RATIO, if present, is the ratio of storage that the administrator has specified that they expect this pool to consume relative to other pools with target ratios set. If both target size bytes and ratio are specified, the ratio takes precedence.
```
From the ceph docs. [0]

>
> On the pool cephfs_data I set the target ratio to 0.2.
>
> Or does the target ratio on vm.pool need to be not *1* but more?

That's up to the kind of usage you're expecting and the ratio set on other pools. See above and the docs. [0]

Cheers
Alwin

[0] https://docs.ceph.com/en/latest/rados/operations/placement-groups/
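For reference, a minimal sketch of how such ratios can be applied from the CLI; the pool names are the ones from this thread, and the values are only an illustration of the relative weighting discussed above, not a recommendation:

# weight vm.pool highest and cephfs_data much lower; ratios only matter relative to each other
ceph osd pool set vm.pool target_size_ratio 1.0
ceph osd pool set cephfs_data target_size_ratio 0.2

# check what the autoscaler derives from the new ratios
ceph osd pool autoscale-status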
From tsabolov at t8.ru  Wed Feb  2 08:27:51 2022
From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=)
Date: Wed, 2 Feb 2022 10:27:51 +0300
Subject: [PVE-User] Ceph df
In-Reply-To: References: Message-ID:

Hello,

I read the documentation before. I know this page.

On the placement-groups page there is this part:

*TARGET RATIO*, if present, is the ratio of storage that the administrator has specified that they expect this pool to consume relative to other pools with target ratios set. If both target size bytes and ratio are specified, the ratio takes precedence.

If I understand it right, I can set the ratio to <= 2 or 3 and that is a valid ratio for this pool? Am I correct?

02.02.2022 07:43, Alwin Antreich via pve-user wrote:
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Sergey TS
Best regards

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From alwin at antreich.com  Wed Feb  2 09:24:46 2022
From: alwin at antreich.com (Alwin Antreich)
Date: Wed, 02 Feb 2022 08:24:46 +0000
Subject: [PVE-User] Ceph df
In-Reply-To: References: Message-ID:

Hello Sergey,

February 2, 2022 8:27 AM, "Sergey Tsabolov" wrote:
> Hello,
>
> I read the documentation before. I know this page.
>
> On the placement-groups page there is this part:
>
> TARGET RATIO, if present, is the ratio of storage that the administrator has specified that they
> expect this pool to consume relative to other pools with target ratios set. If both target size
> bytes and ratio are specified, the ratio takes precedence.
>
> If I understand it right, I can set the ratio to <= 2 or 3 and that is a valid ratio for this pool? Am I correct?

As it says in the docs, the ratio is relative to other pools with a ratio. Whatever you want to set, it is relative to these ratios. You can change these ratios at any time; you'll see if they fit when you store data on them.

Cheers,
Alwin

From sir_Misiek1 at o2.pl  Thu Feb  3 11:06:05 2022
From: sir_Misiek1 at o2.pl (lord_Niedzwiedz)
Date: Thu, 3 Feb 2022 11:06:05 +0100
Subject: [PVE-User] pveam available
Message-ID:

Hi to all.

Containers are available on the site https://uk.lxd.images.canonical.com/images/

How do I add them to Proxmox (a path) so that they are listed by the command:
pveam available

and can be downloaded with:
pveam download local

Right now, it only works like this:
root at aaa:/var/lib/vz/template/cache# wget https://uk.lxd.images.canonical.com/images/kali/current/amd64/default/20220131_17:14/rootfs.tar.xz

Thank you.
Gregory Misiek
Best regards

From t.lamprecht at proxmox.com  Thu Feb  3 12:56:57 2022
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Thu, 3 Feb 2022 12:56:57 +0100
Subject: [PVE-User] pveam available
In-Reply-To: References: Message-ID: <252eb295-235d-6ba7-2484-4fd312c1c216@proxmox.com>

Hi,

On 03.02.22 11:06, lord_Niedzwiedz wrote:
> Containers are available on the site https://uk.lxd.images.canonical.com/images/
>
> How do I add them to Proxmox (a path) so that they are listed by the command:
> pveam available

That's not possible at the moment, and there's no CT registry format that would allow doing that in a secure fashion.

>
> and can be downloaded with:
> pveam download local
>
>
> Right now, it only works like this:
> root at aaa:/var/lib/vz/template/cache# wget https://uk.lxd.images.canonical.com/images/kali/current/amd64/default/20220131_17:14/rootfs.tar.xz
>

That's not the only way: since PVE 7.0 the API and UI allow downloading templates or ISOs directly to a storage from a URL, and the UI integration is quite convenient.

https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/storage/{storage}/download-url

cheers,
Thomas
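As a rough sketch, the API endpoint mentioned above can also be called from a node's shell with pvesh (the node name, target storage and stored filename below are placeholders):

pvesh create /nodes/NODENAME/storage/local/download-url \
  --content vztmpl \
  --filename kali-rootfs.tar.xz \
  --url https://uk.lxd.images.canonical.com/images/kali/current/amd64/default/20220131_17:14/rootfs.tar.xz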
From tsabolov at t8.ru  Fri Feb  4 11:15:28 2022
From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=)
Date: Fri, 4 Feb 2022 13:15:28 +0300
Subject: [PVE-User] ceph osd tree & destroy_cephfs
Message-ID:

Hi to all.

In my Proxmox cluster with 7 nodes + Ceph storage I tried to change the PGs, target ratio and target size on some pools. MAX AVAIL on the important pool did not change; I think it will change if I destroy 2 pools on Ceph.

I read the instructions at https://pve.proxmox.com/pve-docs/chapter-pveceph.html#_destroy_cephfs and I need to ask: if I destroy the CephFS pools, will it affect the other pools? For now there is no data on them; I don't use them for backups or anything else.

For now I have:

ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    106 TiB  98 TiB  8.0 TiB  8.1 TiB   7.58
TOTAL  106 TiB  98 TiB  8.0 TiB  8.1 TiB   7.58

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1   16 MiB       22   32 MiB   0        46 TiB
vm.pool                 2  512  2.7 TiB  740.12k  8.0 TiB   7.99     31 TiB
cephfs_data             3   32  1.9 KiB        0  3.8 KiB   0        46 TiB
cephfs_metadata         4    2   23 MiB       28   47 MiB   0        46 TiB

And one other question: below is my ceph osd tree. As you can see, on some OSDs the REWEIGHT is less than the default 1.00000. Can you suggest how I change the REWEIGHT on these OSDs?

ID   CLASS  WEIGHT     TYPE NAME         STATUS  REWEIGHT  PRI-AFF
 -1         106.43005  root default
-13          14.55478      host pve3101
 10    hdd    7.27739          osd.10        up   1.00000  1.00000
 11    hdd    7.27739          osd.11        up   1.00000  1.00000
-11          14.55478      host pve3103
  8    hdd    7.27739          osd.8         up   1.00000  1.00000
  9    hdd    7.27739          osd.9         up   1.00000  1.00000
 -3          14.55478      host pve3105
  0    hdd    7.27739          osd.0         up   1.00000  1.00000
  1    hdd    7.27739          osd.1         up   1.00000  1.00000
 -5          14.55478      host pve3107
* 2    hdd    7.27739          osd.2         up   0.95001  1.00000*
  3    hdd    7.27739          osd.3         up   1.00000  1.00000
 -9          14.55478      host pve3108
  6    hdd    7.27739          osd.6         up   1.00000  1.00000
  7    hdd    7.27739          osd.7         up   1.00000  1.00000
 -7          14.55478      host pve3109
  4    hdd    7.27739          osd.4         up   1.00000  1.00000
  5    hdd    7.27739          osd.5         up   1.00000  1.00000
-15          19.10138      host pve3111
 12    hdd   10.91409          osd.12        up   1.00000  1.00000
*13    hdd    0.90970          osd.13        up   0.76846  1.00000*
 14    hdd    0.90970          osd.14        up   1.00000  1.00000
 15    hdd    0.90970          osd.15        up   1.00000  1.00000
 16    hdd    0.90970          osd.16        up   1.00000  1.00000
 17    hdd    0.90970          osd.17        up   1.00000  1.00000
*18    hdd    0.90970          osd.18        up   0.75006  1.00000*
 19    hdd    0.90970          osd.19        up   1.00000  1.00000
 20    hdd    0.90970          osd.20        up   1.00000  1.00000
 21    hdd    0.90970          osd.21        up   1.00000  1.00000

Sergey TS
Best regards

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
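On the REWEIGHT question above, a short sketch, taking osd.2 from the tree as the example. The REWEIGHT column is the temporary override weight (between 0 and 1), set either manually or by reweight-by-utilization, and it can simply be set back:

# set the override reweight of OSD 2 back to 1.0 (this will move data)
ceph osd reweight 2 1.0

# then watch the rebalance
ceph -s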
From dziobek at hlrs.de  Fri Feb  4 13:39:29 2022
From: dziobek at hlrs.de (Martin Dziobek)
Date: Fri, 4 Feb 2022 13:39:29 +0100
Subject: [PVE-User] Proxmox and ZFS on large JBOD ?
In-Reply-To: <20220110141350.716d9727@schleppmd.hlrs.de>
References: <20220110141350.716d9727@schleppmd.hlrs.de>
Message-ID: <20220204133929.4c7bdb80@schleppmd.hlrs.de>

Thanks for all your input!

I tried TrueNAS Scale, but switched to TrueNAS Core, because Scale is still somewhat beta and the license conditions are unclear.

After discovering the possibility to run VMs in TrueNAS, it came to my mind to run the Proxmox Backup Server in a VM on TrueNAS, with an underlying TrueNAS volume as backup media. Let's see ...

Best regards,
Martin

On Mon, 10 Jan 2022 14:13:50 +0100 Martin Dziobek wrote:
> Hi pve-users !
>
> Does anybody have experience with whether Proxmox works
> flawlessly to manage a large ZFS volume consisting
> of a SAS-connected JBOD of 60 * 1TB HDDs ?
>
> Right now, management is done with a regular
> Debian 11 installation, and rebooting the thing
> always ends up in a timeout mess at network startup,
> because it takes ages to enumerate all those member disks,
> import the zpool and export it via NFS.
>
> I am considering installing Proxmox on this server for the
> sole purpose of smooth startup and management operation.
> Might that be a stable solution ?
>
> Best regards,
> Martin
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>

From elacunza at binovo.es  Fri Feb  4 16:14:15 2022
From: elacunza at binovo.es (Eneko Lacunza)
Date: Fri, 4 Feb 2022 16:14:15 +0100
Subject: PBS - dirty bitmaps and multiple PBS storagtes
Message-ID:

Hi,

We have set up two PBS storages in a PVE cluster; one PBS is local and the other PBS is remote. We are doing this because we don't want the remote PBS to be able to reach the local LAN, so we can't use a sync job on the remote PBS.

The setup is working fine, but I noticed that the dirty bitmap is being recreated each time a backup is performed (both backups are daily).

Would it be possible for PVE to create dirty bitmaps per backup storage/PBS storage? That would make this kind of setup more efficient (much less disk read and CPU use during backups).

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From dietmar at proxmox.com  Fri Feb  4 16:46:22 2022
From: dietmar at proxmox.com (Dietmar Maurer)
Date: Fri, 4 Feb 2022 16:46:22 +0100 (CET)
Subject: [PVE-User] PBS - dirty bitmaps and multiple PBS storagtes
Message-ID: <1564786117.3421.1643989582334@webmail.proxmox.com>

> Would it be possible for PVE to create dirty-bitmaps per backup storage/PBS storage? That would make this kind of setups more efficient

We decided against that because this can be a big memory leak. Please notice that we can never free those bitmaps, so they can accumulate when the user changes storage over time...

From elacunza at binovo.es  Fri Feb  4 17:21:51 2022
From: elacunza at binovo.es (Eneko Lacunza)
Date: Fri, 4 Feb 2022 17:21:51 +0100
Subject: [PVE-User] PBS - dirty bitmaps and multiple PBS storagtes
In-Reply-To: <1564786117.3421.1643989582334@webmail.proxmox.com>
References: <1564786117.3421.1643989582334@webmail.proxmox.com>
Message-ID:

Hi Dietmar,

On 4/2/22 16:46, Dietmar Maurer wrote:
>> Would it be possible for PVE to create dirty-bitmaps per backup
>> storage/PBS storage? That would make this kind of setups more efficient
>
> We decided against that because this can be a big memory leak. Please notice that we can never free those bitmaps, so they can accumulate when the user changes storage over time...
>
Dirty bitmaps are lost if the VM is stopped or migrated, if I understood correctly?

If so, leaks would be cleared on the next node reboot (version upgrade or new kernel installed).

But I understand the issue. Maybe a checkbox in the advanced settings of the PBS storage, with a warning, to request a storage-specific dirty bitmap...? :)

Thanks!

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/

From max at maxhill.eu  Sun Feb  6 21:50:47 2022
From: max at maxhill.eu (Maximilian Hill)
Date: Sun, 6 Feb 2022 21:50:47 +0100
Subject: [PVE-User] Is there a way to securely delete images from Ceph?
In-Reply-To: <7428448f-2aa9-5115-3832-e679108a56b6@gmail.com>
References: <7428448f-2aa9-5115-3832-e679108a56b6@gmail.com>
Message-ID: <3b90add4-c026-94ae-22eb-da5542f120c1@maxhill.eu>

When you're running systems containing sensitive data, you should take care of this before you deploy. As far as I know, there's no way to guarantee that you can't find anything on the OSD's block device. But once you delete the snapshots and remove the RBD image, it would be awful trying to reconstruct anything.

On 1/24/22 11:16 AM, Uwe Sauter wrote:
> Hi list,
>
> just a quick question: is there a way to securely (cryptographically) erase images in Ceph?
> Overwriting / shredding the VM's block device from a live ISO is probably not enough given that Ceph
> might use Copy on Write. So does Ceph provide something alike?
> > > Regards, > > Uwe > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From tonci at suma-informatika.hr Mon Feb 7 01:10:44 2022 From: tonci at suma-informatika.hr (=?UTF-8?B?VG9uxI1pIFN0aXBpxI1ldmnEhw==?=) Date: Mon, 7 Feb 2022 01:10:44 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS So, if I create zfs10 raid as boot drive , I have no more disks left for datastore, right ? BUT, is it possible to make directory on this / boot drive i.e. /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? and is there any disadvantages against separate 2 x 240g ssd as boot and 4 x 2T sata (zfs raid 10 ) for datastore? Actually I'ma having hard time finding 2 x 2,5" small? sata drives for boot .... ? Thank you very much in advance BR Tonci srda?an pozdrav / best regards Ton?i Stipi?evi?, dipl. ing. elektr. direktor / manager SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke Small & Medium Business IT Support / Management mob: 091 1234003 www.suma-informatika.hr From a.lauterer at proxmox.com Mon Feb 7 09:44:37 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 7 Feb 2022 09:44:37 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: A PBS datastore is just a directory somewhere. If you set up the system on a RAID 10 and want to use that as well for the datastore, I would create a new ZFS dataset for the datastore. This allows you to change certain ZFS properties on that dataset specifically. There are a few things to consider if you want to have the OS on the same disks as the datastore. How fast are the disks? HDDs or SSDs? If they are HDDs, then having the OS on separate disks is a good idea because HDDs will already be having a hard time to provide the IOPS for decent performance. If they are *decent* SSDs, you should be fine. No idea how easy it is to get stuff in Croatia, but this should give you some ideas: https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r 250 GiB is plenty of the OS, but anything smaller is hard to find or of dubious quality ;) Cheers, Aaron On 2/7/22 01:10, Ton?i Stipi?evi? wrote: > I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS > > So, if I create zfs10 raid as boot drive , I have no more disks left for datastore, right ? > > BUT, is it possible to make directory on this / boot drive i.e. /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? and is there any disadvantages against separate 2 x 240g ssd as boot and 4 x 2T sata (zfs raid 10 ) for datastore? > > Actually I'ma having hard time finding 2 x 2,5" small? sata drives for boot .... > > ? Thank you very much in advance > > BR > > Tonci > > srda?an pozdrav / best regards > > Ton?i Stipi?evi?, dipl. ing. elektr. 
> direktor / manager > > SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 > > Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke > Small & Medium Business IT Support / Management > > mob: 091 1234003 > www.suma-informatika.hr > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From a.lauterer at proxmox.com Mon Feb 7 09:44:37 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 7 Feb 2022 09:44:37 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: A PBS datastore is just a directory somewhere. If you set up the system on a RAID 10 and want to use that as well for the datastore, I would create a new ZFS dataset for the datastore. This allows you to change certain ZFS properties on that dataset specifically. There are a few things to consider if you want to have the OS on the same disks as the datastore. How fast are the disks? HDDs or SSDs? If they are HDDs, then having the OS on separate disks is a good idea because HDDs will already be having a hard time to provide the IOPS for decent performance. If they are *decent* SSDs, you should be fine. No idea how easy it is to get stuff in Croatia, but this should give you some ideas: https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r 250 GiB is plenty of the OS, but anything smaller is hard to find or of dubious quality ;) Cheers, Aaron On 2/7/22 01:10, Ton?i Stipi?evi? wrote: > I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS > > So, if I create zfs10 raid as boot drive , I have no more disks left for datastore, right ? > > BUT, is it possible to make directory on this / boot drive i.e. /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? and is there any disadvantages against separate 2 x 240g ssd as boot and 4 x 2T sata (zfs raid 10 ) for datastore? > > Actually I'ma having hard time finding 2 x 2,5" small? sata drives for boot .... > > ? Thank you very much in advance > > BR > > Tonci > > srda?an pozdrav / best regards > > Ton?i Stipi?evi?, dipl. ing. elektr. > direktor / manager > > SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 > > Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke > Small & Medium Business IT Support / Management > > mob: 091 1234003 > www.suma-informatika.hr > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From nada at verdnatura.es Mon Feb 7 10:55:18 2022 From: nada at verdnatura.es (nada) Date: Mon, 07 Feb 2022 10:55:18 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: <63d3af4d5a917c10c86b32d0b2b16cee@verdnatura.es> hi Tonci in case you do not have time and small disks you may create 3 ZFS mirrors so there will be for example rpool for PVE ctpool for CT/QM datapool for backups e.g. in your case rpool and ctpool may be on 2disks (sda and sdb) datapool will be on 2 disks (sdc and sdd) during PVE installation you may select ZFS allocate 2 disks in mirror and limit volume space to 300G after installation you may see status zpool list -v fdisk -l /dev/sda fdisk -l /dev/sdb ... 
PVE will be probably installed at sda1,sda2,sda3 and sdb1,sdb2,sdb3 respectively and rpool will be probably ZFS mirror over the partitions sda3 and sdb3 after that you may create partitions sda4 and sdb4 (type Solaris /usr & Apple ZFS) with zpool mirror over it zpool create -f -o ashift=12 ctpool mirror /dev/sda4 /dev/sdb4 after installation you may dedicate the rest 2 disks for datastore e.g. zpool create -f -o ashift=12 datapool mirror /dev/sdc /dev/sdd finally you will declare your PVE storage and you may start to play ;-) hope it helps Nada On 2022-02-07 01:10, Ton?i Stipi?evi? wrote: > I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS > > So, if I create zfs10 raid as boot drive , I have no more disks left > for datastore, right ? > > BUT, is it possible to make directory on this / boot drive i.e. > /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? > and is there any disadvantages against separate 2 x 240g ssd as boot > and 4 x 2T sata (zfs raid 10 ) for datastore? > > Actually I'ma having hard time finding 2 x 2,5" small? sata drives for > boot .... > > ? Thank you very much in advance > > BR > > Tonci > > srda?an pozdrav / best regards > > Ton?i Stipi?evi?, dipl. ing. elektr. > direktor / manager > > SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 > > Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke > Small & Medium Business IT Support / Management > > mob: 091 1234003 > www.suma-informatika.hr > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From tonci at suma-informatika.hr Mon Feb 7 12:24:53 2022 From: tonci at suma-informatika.hr (=?UTF-8?B?VG9uxI1pIFN0aXBpxI1ldmnEhw==?=) Date: Mon, 7 Feb 2022 12:24:53 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: Thank you Aaron, this atribut "small" is blocking point ... like you said it is hard to find sata disk under 250G I was also thinking about 2 x sata-dom 32G/64G ? for zfs boot ? Does it make any sense (performance/relayibilty-wise? Ok , today I'm gonna install PBS on 4x2T (disks, not ssds) zfs raid10? and create one new dataset in /rpool? -> /rpool/databck??? ...? So , the PBS itself and datastore will reside on the same pool ...? Then I'll make some copmarison tests (backup/restore)? and report back some relevant results? ...? I'd like to avoid PBS on its own pool because this SuperMicro 721 case has only 4 disk slots , a additional ones (2) have to be connected directly to MB ... 'till then BR Tonci srda?an pozdrav / best regards Ton?i Stipi?evi?, dipl. ing. elektr. direktor / manager SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke Small & Medium Business IT Support / Management mob: 091 1234003 www.suma-informatika.hr On 07. 02. 2022. 09:44, Aaron Lauterer wrote: > A PBS datastore is just a directory somewhere. > > If you set up the system on a RAID 10 and want to use that as well for > the datastore, I would create a new ZFS dataset for the datastore. > This allows you to change certain ZFS properties on that dataset > specifically. > > There are a few things to consider if you want to have the OS on the > same disks as the datastore. > How fast are the disks? HDDs or SSDs? > If they are HDDs, then having the OS on separate disks is a good idea > because HDDs will already be having a hard time to provide the IOPS > for decent performance. 
If they are *decent* SSDs, you should be fine. > > No idea how easy it is to get stuff in Croatia, but this should give > you some ideas: > https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r > > 250 GiB is plenty of the OS, but anything smaller is hard to find or > of dubious quality ;) > > Cheers, > Aaron > > > On 2/7/22 01:10, Ton?i Stipi?evi? wrote: >> I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS >> >> So, if I create zfs10 raid as boot drive , I have no more disks left >> for datastore, right ? >> >> BUT, is it possible to make directory on this / boot drive i.e. >> /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? >> and is there any disadvantages against separate 2 x 240g ssd as boot >> and 4 x 2T sata (zfs raid 10 ) for datastore? >> >> Actually I'ma having hard time finding 2 x 2,5" small? sata drives >> for boot .... >> >> ?? Thank you very much in advance >> >> BR >> >> Tonci >> >> srda?an pozdrav / best regards >> >> Ton?i Stipi?evi?, dipl. ing. elektr. >> direktor / manager >> >> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >> >> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >> Small & Medium Business IT Support / Management >> >> mob: 091 1234003 >> www.suma-informatika.hr >> >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From tonci at suma-informatika.hr Mon Feb 7 12:24:53 2022 From: tonci at suma-informatika.hr (=?UTF-8?B?VG9uxI1pIFN0aXBpxI1ldmnEhw==?=) Date: Mon, 7 Feb 2022 12:24:53 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: Thank you Aaron, this atribut "small" is blocking point ... like you said it is hard to find sata disk under 250G I was also thinking about 2 x sata-dom 32G/64G ? for zfs boot ? Does it make any sense (performance/relayibilty-wise? Ok , today I'm gonna install PBS on 4x2T (disks, not ssds) zfs raid10? and create one new dataset in /rpool? -> /rpool/databck??? ...? So , the PBS itself and datastore will reside on the same pool ...? Then I'll make some copmarison tests (backup/restore)? and report back some relevant results? ...? I'd like to avoid PBS on its own pool because this SuperMicro 721 case has only 4 disk slots , a additional ones (2) have to be connected directly to MB ... 'till then BR Tonci srda?an pozdrav / best regards Ton?i Stipi?evi?, dipl. ing. elektr. direktor / manager SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke Small & Medium Business IT Support / Management mob: 091 1234003 www.suma-informatika.hr On 07. 02. 2022. 09:44, Aaron Lauterer wrote: > A PBS datastore is just a directory somewhere. > > If you set up the system on a RAID 10 and want to use that as well for > the datastore, I would create a new ZFS dataset for the datastore. > This allows you to change certain ZFS properties on that dataset > specifically. > > There are a few things to consider if you want to have the OS on the > same disks as the datastore. > How fast are the disks? HDDs or SSDs? > If they are HDDs, then having the OS on separate disks is a good idea > because HDDs will already be having a hard time to provide the IOPS > for decent performance. If they are *decent* SSDs, you should be fine. 
> > No idea how easy it is to get stuff in Croatia, but this should give > you some ideas: > https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r > > 250 GiB is plenty of the OS, but anything smaller is hard to find or > of dubious quality ;) > > Cheers, > Aaron > > > On 2/7/22 01:10, Ton?i Stipi?evi? wrote: >> I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS >> >> So, if I create zfs10 raid as boot drive , I have no more disks left >> for datastore, right ? >> >> BUT, is it possible to make directory on this / boot drive i.e. >> /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go? >> and is there any disadvantages against separate 2 x 240g ssd as boot >> and 4 x 2T sata (zfs raid 10 ) for datastore? >> >> Actually I'ma having hard time finding 2 x 2,5" small? sata drives >> for boot .... >> >> ?? Thank you very much in advance >> >> BR >> >> Tonci >> >> srda?an pozdrav / best regards >> >> Ton?i Stipi?evi?, dipl. ing. elektr. >> direktor / manager >> >> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >> >> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >> Small & Medium Business IT Support / Management >> >> mob: 091 1234003 >> www.suma-informatika.hr >> >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From a.lauterer at proxmox.com Mon Feb 7 13:36:00 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 7 Feb 2022 13:36:00 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: On 2/7/22 12:24, Ton?i Stipi?evi? wrote: > Thank you Aaron, > > this atribut "small" is blocking point ... like you said it is hard to find sata disk under 250G > > I was also thinking about 2 x sata-dom 32G/64G ? for zfs boot ? Does it make any sense (performance/relayibilty-wise? SATA DOMs need to be considered carefully. If they can last long enough, then why not. But if they are just cheap SSDs that will wear out fast... well. > > Ok , today I'm gonna install PBS on 4x2T (disks, not ssds) zfs raid10 and create one new dataset in /rpool? -> /rpool/databck??? ...? So , the PBS itself and datastore will reside on the same pool ...? Then I'll make some copmarison tests (backup/restore)? and report back some relevant results? ...? I'd like to avoid PBS on its own pool because this SuperMicro 721 case has only 4 disk slots , a additional ones (2) have to be connected directly to MB ... > > > 'till then > > BR > > Tonci > > srda?an pozdrav / best regards > > Ton?i Stipi?evi?, dipl. ing. elektr. > direktor / manager > > SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 > > Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke > Small & Medium Business IT Support / Management > > mob: 091 1234003 > www.suma-informatika.hr > > On 07. 02. 2022. 09:44, Aaron Lauterer wrote: >> A PBS datastore is just a directory somewhere. >> >> If you set up the system on a RAID 10 and want to use that as well for the datastore, I would create a new ZFS dataset for the datastore. This allows you to change certain ZFS properties on that dataset specifically. >> >> There are a few things to consider if you want to have the OS on the same disks as the datastore. >> How fast are the disks? HDDs or SSDs? >> If they are HDDs, then having the OS on separate disks is a good idea because HDDs will already be having a hard time to provide the IOPS for decent performance. 
If they are *decent* SSDs, you should be fine. >> >> No idea how easy it is to get stuff in Croatia, but this should give you some ideas: https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r >> >> 250 GiB is plenty of the OS, but anything smaller is hard to find or of dubious quality ;) >> >> Cheers, >> Aaron >> >> >> On 2/7/22 01:10, Ton?i Stipi?evi? wrote: >>> I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS >>> >>> So, if I create zfs10 raid as boot drive , I have no more disks left for datastore, right ? >>> >>> BUT, is it possible to make directory on this / boot drive i.e. /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go and is there any disadvantages against separate 2 x 240g ssd as boot and 4 x 2T sata (zfs raid 10 ) for datastore? >>> >>> Actually I'ma having hard time finding 2 x 2,5" small? sata drives for boot .... >>> >>> ?? Thank you very much in advance >>> >>> BR >>> >>> Tonci >>> >>> srda?an pozdrav / best regards >>> >>> Ton?i Stipi?evi?, dipl. ing. elektr. >>> direktor / manager >>> >>> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >>> >>> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >>> Small & Medium Business IT Support / Management >>> >>> mob: 091 1234003 >>> www.suma-informatika.hr >>> >>> >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at lists.proxmox.com >>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > From tonci at suma-informatika.hr Mon Feb 7 13:42:42 2022 From: tonci at suma-informatika.hr (=?UTF-8?B?VG9uxI1pIFN0aXBpxI1ldmnEhw==?=) Date: Mon, 7 Feb 2022 13:42:42 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: Yes, I'm aware of that too ... Obviously this's? not been? good-practice and you do not have any recommended models/types ? But they are very "practical" though :) srda?an pozdrav / best regards Ton?i Stipi?evi?, dipl. ing. elektr. direktor / manager SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke Small & Medium Business IT Support / Management mob: 091 1234003 www.suma-informatika.hr On 07. 02. 2022. 13:36, Aaron Lauterer wrote: > > > On 2/7/22 12:24, Ton?i Stipi?evi? wrote: >> Thank you Aaron, >> >> this atribut "small" is blocking point ... like you said it is hard >> to find sata disk under 250G >> >> I was also thinking about 2 x sata-dom 32G/64G ? for zfs boot ? Does >> it make any sense (performance/relayibilty-wise? > > SATA DOMs need to be considered carefully. If they can last long > enough, then why not. But if they are just cheap SSDs that will wear > out fast... well. > >> >> Ok , today I'm gonna install PBS on 4x2T (disks, not ssds) zfs raid10 >> and create one new dataset in /rpool? -> /rpool/databck??? ...? So , >> the PBS itself and datastore will reside on the same pool ...? Then >> I'll make some copmarison tests (backup/restore)? and report back >> some relevant results ...? I'd like to avoid PBS on its own pool >> because this SuperMicro 721 case has only 4 disk slots , a additional >> ones (2) have to be connected directly to MB ... >> >> >> 'till then >> >> BR >> >> Tonci >> >> srda?an pozdrav / best regards >> >> Ton?i Stipi?evi?, dipl. ing. elektr. 
>> direktor / manager >> >> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >> >> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >> Small & Medium Business IT Support / Management >> >> mob: 091 1234003 >> www.suma-informatika.hr >> >> On 07. 02. 2022. 09:44, Aaron Lauterer wrote: >>> A PBS datastore is just a directory somewhere. >>> >>> If you set up the system on a RAID 10 and want to use that as well >>> for the datastore, I would create a new ZFS dataset for the >>> datastore. This allows you to change certain ZFS properties on that >>> dataset specifically. >>> >>> There are a few things to consider if you want to have the OS on the >>> same disks as the datastore. >>> How fast are the disks? HDDs or SSDs? >>> If they are HDDs, then having the OS on separate disks is a good >>> idea because HDDs will already be having a hard time to provide the >>> IOPS for decent performance. If they are *decent* SSDs, you should >>> be fine. >>> >>> No idea how easy it is to get stuff in Croatia, but this should give >>> you some ideas: >>> https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r >>> >>> 250 GiB is plenty of the OS, but anything smaller is hard to find or >>> of dubious quality ;) >>> >>> Cheers, >>> Aaron >>> >>> >>> On 2/7/22 01:10, Ton?i Stipi?evi? wrote: >>>> I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS >>>> >>>> So, if I create zfs10 raid as boot drive , I have no more disks >>>> left for datastore, right ? >>>> >>>> BUT, is it possible to make directory on this / boot drive i.e. >>>> /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go >>>> and is there any disadvantages against separate 2 x 240g ssd as >>>> boot and 4 x 2T sata (zfs raid 10 ) for datastore? >>>> >>>> Actually I'ma having hard time finding 2 x 2,5" small? sata drives >>>> for boot .... >>>> >>>> ?? Thank you very much in advance >>>> >>>> BR >>>> >>>> Tonci >>>> >>>> srda?an pozdrav / best regards >>>> >>>> Ton?i Stipi?evi?, dipl. ing. elektr. >>>> direktor / manager >>>> >>>> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >>>> >>>> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >>>> Small & Medium Business IT Support / Management >>>> >>>> mob: 091 1234003 >>>> www.suma-informatika.hr >>>> >>>> >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at lists.proxmox.com >>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> > From elacunza at binovo.es Tue Feb 8 12:25:31 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 8 Feb 2022 12:25:31 +0100 Subject: [PVE-User] PBS - dirty bitmaps and multiple PBS storagtes In-Reply-To: References: <1564786117.3421.1643989582334@webmail.proxmox.com> Message-ID: <7315b262-b74f-e3b6-819c-64d77e133c25@binovo.es> Hi, I think this could be solved with a "Reverse Sync" functionality: - Currently Syncs are performed by a PBS that connects to another PBS and "pulls" datastore content to a local datastore. - "Reverse Sync" would instead connect to a remote PBS and "push" local datastore content to remote datastore. I don't know the technical details, but this seems like a backup task (with chunk checksums already calcultated and new backups created on remote with "arbitrary" names/timestamps. Do you think this would make sense? The situation would be: - Local PBS and PVE cluster. PVE backups to Local PBS. 
- Remote PBS ("Cloud") With Local PBS "Reverse Sync"ing to Remote PBS: - No static IP needed for Local PBS - No firewall config needed for Local PBS - No additional PVE backup task for remote backup -> no dirty map issues Cheers Eneko El 4/2/22 a las 17:21, Eneko Lacunza escribi?: > Hi Dietmar, > > El 4/2/22 a las 16:46, Dietmar Maurer escribi?: >>> Would it be possible for PVE to create dirty-bitmaps per backup >> storage/PBS storage? That would make this kind of setups more efficient >> >> We decided against that because this can be a big memory leak. Please notice that we can never free those bitmaps, so they can accumulate when the user changes storage over time... >> > Dirty-bitmaps are lost if VM is stopped or migrated, if I understood > correctly? > > If so, leaks would be cleared on next node reboot (version upgrade or > new kernel installed). > > But I understand the issue. Maybe a check in advanced settings of PBS > storage with a warning to request a storage-speciffic dirty-bitmap...? :) > > Thanks! Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From a.lauterer at proxmox.com Tue Feb 8 13:52:35 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Tue, 8 Feb 2022 13:52:35 +0100 Subject: [PVE-User] PBS datastore In-Reply-To: References: Message-ID: On 2/7/22 13:42, Ton?i Stipi?evi? wrote: > Yes, I'm aware of that too ... > > Obviously this's? not been? good-practice and you do not have any recommended models/types ? No, I actually never used any, only looked into them a few times, but never found anything that I really liked. Another thing though that could work, especially with newer boards, are NVMEs in m.2 format. The last server boards that I looked into and eventually also bought, have m.2 NVME slots. > > But they are very "practical" though :) > > srda?an pozdrav / best regards > > Ton?i Stipi?evi?, dipl. ing. elektr. > direktor / manager > > SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 > > Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke > Small & Medium Business IT Support / Management > > mob: 091 1234003 > www.suma-informatika.hr > > On 07. 02. 2022. 13:36, Aaron Lauterer wrote: >> >> >> On 2/7/22 12:24, Ton?i Stipi?evi? wrote: >>> Thank you Aaron, >>> >>> this atribut "small" is blocking point ... like you said it is hard to find sata disk under 250G >>> >>> I was also thinking about 2 x sata-dom 32G/64G ? for zfs boot ? Does it make any sense (performance/relayibilty-wise? >> >> SATA DOMs need to be considered carefully. If they can last long enough, then why not. But if they are just cheap SSDs that will wear out fast... well. >> >>> >>> Ok , today I'm gonna install PBS on 4x2T (disks, not ssds) zfs raid10 and create one new dataset in /rpool? -> /rpool/databck??? ...? So , the PBS itself and datastore will reside on the same pool ...? Then I'll make some copmarison tests (backup/restore)? and report back some relevant results ...? I'd like to avoid PBS on its own pool because this SuperMicro 721 case has only 4 disk slots , a additional ones (2) have to be connected directly to MB ... >>> >>> >>> 'till then >>> >>> BR >>> >>> Tonci >>> >>> srda?an pozdrav / best regards >>> >>> Ton?i Stipi?evi?, dipl. ing. elektr. 
>>> direktor / manager >>> >>> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >>> >>> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >>> Small & Medium Business IT Support / Management >>> >>> mob: 091 1234003 >>> www.suma-informatika.hr >>> >>> On 07. 02. 2022. 09:44, Aaron Lauterer wrote: >>>> A PBS datastore is just a directory somewhere. >>>> >>>> If you set up the system on a RAID 10 and want to use that as well for the datastore, I would create a new ZFS dataset for the datastore. This allows you to change certain ZFS properties on that dataset specifically. >>>> >>>> There are a few things to consider if you want to have the OS on the same disks as the datastore. >>>> How fast are the disks? HDDs or SSDs? >>>> If they are HDDs, then having the OS on separate disks is a good idea because HDDs will already be having a hard time to provide the IOPS for decent performance. If they are *decent* SSDs, you should be fine. >>>> >>>> No idea how easy it is to get stuff in Croatia, but this should give you some ideas: https://geizhals.eu/?cat=hdssd&xf=2028_256~4643_Power-Loss+Protection~4832_1~4836_2&sort=r >>>> >>>> 250 GiB is plenty of the OS, but anything smaller is hard to find or of dubious quality ;) >>>> >>>> Cheers, >>>> Aaron >>>> >>>> >>>> On 2/7/22 01:10, Ton?i Stipi?evi? wrote: >>>>> I'm planning to use SUPERMICRO 721 box (4 x 2T sata) as PBS >>>>> >>>>> So, if I create zfs10 raid as boot drive , I have no more disks left for datastore, right ? >>>>> >>>>> BUT, is it possible to make directory on this / boot drive i.e. /mnt/datastore?? and attach it as datastore ??? Is that a way-to-go and is there any disadvantages against separate 2 x 240g ssd as boot and 4 x 2T sata (zfs raid 10 ) for datastore? >>>>> >>>>> Actually I'ma having hard time finding 2 x 2,5" small? sata drives for boot .... >>>>> >>>>> ?? Thank you very much in advance >>>>> >>>>> BR >>>>> >>>>> Tonci >>>>> >>>>> srda?an pozdrav / best regards >>>>> >>>>> Ton?i Stipi?evi?, dipl. ing. elektr. >>>>> direktor / manager >>>>> >>>>> SUMA Informatika d.o.o., Badali?eva 27, OIB 93926415263 >>>>> >>>>> Podr?ka / Upravljanje IT sustavima za male i srednje tvrtke >>>>> Small & Medium Business IT Support / Management >>>>> >>>>> mob: 091 1234003 >>>>> www.suma-informatika.hr >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> pve-user mailing list >>>>> pve-user at lists.proxmox.com >>>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> >> > From damart.vidin at gmail.com Thu Feb 10 11:18:38 2022 From: damart.vidin at gmail.com (David Martin) Date: Thu, 10 Feb 2022 11:18:38 +0100 Subject: [PVE-User] Update proxmox Message-ID: Hi, I have 25 server's proxmox, and i need update one server who is in 5.14-4 version. Do you think if it's possible to update directly with apt update && apt upgrade in terminal ? -- david martin From tsabolov at t8.ru Thu Feb 10 11:52:57 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 10 Feb 2022 13:52:57 +0300 Subject: [PVE-User] Update proxmox In-Reply-To: References: Message-ID: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> Hi David, Before upgrade you need backup all? VM? or moved all to other host so as not to interrupt their work. After see this? documentation Upgrade from 5.x to 6.0 10.02.2022 13:18, David Martin ?????: > Hi, > > I have 25 server's proxmox, and i need update one server who is in 5.14-4 > version. 
> > Do you think if it's possible to update directly with apt update && apt > upgrade > in terminal ? > > > Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From leesteken at protonmail.ch Thu Feb 10 12:35:23 2022 From: leesteken at protonmail.ch (Arjen) Date: Thu, 10 Feb 2022 11:35:23 +0000 Subject: [PVE-User] Update proxmox In-Reply-To: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> References: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> Message-ID: On Thursday, February 10th, 2022 at 11:52, ?????? ??????? wrote: > Hi David, > > Before upgrade you need backup all VM or moved all to other host so as > not to interrupt their work. > > After see this documentation Upgrade from 5.x to 6.0 > https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 > > 10.02.2022 13:18, David Martin ?????: > > > Hi, > > > > I have 25 server's proxmox, and i need update one server who is in 5.14-4 > > version. > > > > Do you think if it's possible to update directly with apt update && apt > > upgrade > > in terminal ? > > Sergey TS Probably not but more importantly: Never use apt upgrade ! Always use apt dist-upgrade for Proxmox From damart.vidin at gmail.com Thu Feb 10 13:13:51 2022 From: damart.vidin at gmail.com (David Martin) Date: Thu, 10 Feb 2022 13:13:51 +0100 Subject: [PVE-User] Update proxmox In-Reply-To: References: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> Message-ID: Thank's you, Thank you for your reply, I will first put the system packages if there is not too much adhesion, and then update the proxmox packages. I use proxmox GUI as much as possible. apt-get install --only-upgrade Le jeu. 10 f?vr. 2022 ? 12:35, Arjen via pve-user < pve-user at lists.proxmox.com> a ?crit : > > > > ---------- Forwarded message ---------- > From: Arjen > To: Proxmox VE user list > Cc: > Bcc: > Date: Thu, 10 Feb 2022 11:35:23 +0000 > Subject: Re: [PVE-User] Update proxmox > On Thursday, February 10th, 2022 at 11:52, ?????? ??????? > wrote: > > > Hi David, > > > > Before upgrade you need backup all VM or moved all to other host so as > > not to interrupt their work. > > > > After see this documentation Upgrade from 5.x to 6.0 > > https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 > > > > 10.02.2022 13:18, David Martin ?????: > > > > > Hi, > > > > > > I have 25 server's proxmox, and i need update one server who is in > 5.14-4 > > > version. > > > > > > Do you think if it's possible to update directly with apt update && apt > > > upgrade > > > in terminal ? > > > > Sergey TS > > Probably not but more importantly: Never use apt upgrade ! > Always use apt dist-upgrade for Proxmox > > > > > ---------- Forwarded message ---------- > From: Arjen via pve-user > To: Proxmox VE user list > Cc: Arjen > Bcc: > Date: Thu, 10 Feb 2022 11:35:23 +0000 > Subject: Re: [PVE-User] Update proxmox > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- david martin From alain.pean at c2n.upsaclay.fr Thu Feb 10 15:13:45 2022 From: alain.pean at c2n.upsaclay.fr (=?UTF-8?Q?Alain_P=c3=a9an?=) Date: Thu, 10 Feb 2022 15:13:45 +0100 Subject: [PVE-User] Update proxmox In-Reply-To: References: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> Message-ID: <47ba4e16-73c2-22b8-1549-375b53ba59c1@c2n.upsaclay.fr> Le 10/02/2022 ? 
13:13, David Martin a ?crit?: > Thank you for your reply, > I will first put the system packages if there is not too much adhesion, and > then update the proxmox packages. > I use proxmox GUI as much as possible. > apt-get install --only-upgrade If I remember correctly, between 5.x and 6.x, there was a major upgrade of kvm and corosync, which resulted in the disparition of node VMs during the upgrade, that re-appear only at the end of the upgrade. And you have a new version of Debian when going from 5.x to 6.x, that requires changes in Debian repositories, and then again from 6.x to 7.x. I would think less risky to upgrade first to 6.x, then in a second step, to 7.x. Alain -- Administrateur Syst?me/R?seau C2N Centre de Nanosciences et Nanotechnologies (UMR 9001) Boulevard Thomas Gobert (ex Avenue de La Vauve), 91120 Palaiseau Tel : 01-70-27-06-88 Bureau A255 From damart.vidin at gmail.com Thu Feb 10 16:18:50 2022 From: damart.vidin at gmail.com (David Martin) Date: Thu, 10 Feb 2022 16:18:50 +0100 Subject: [PVE-User] Update proxmox In-Reply-To: <47ba4e16-73c2-22b8-1549-375b53ba59c1@c2n.upsaclay.fr> References: <075f6b81-6696-eb98-9665-214af1224d9d@t8.ru> <47ba4e16-73c2-22b8-1549-375b53ba59c1@c2n.upsaclay.fr> Message-ID: thank's to reply. Merci pour ce retour Cheers Le jeu. 10 f?vr. 2022 ? 15:23, Alain P?an a ?crit : > Le 10/02/2022 ? 13:13, David Martin a ?crit : > > Thank you for your reply, > > I will first put the system packages if there is not too much adhesion, > and > > then update the proxmox packages. > > I use proxmox GUI as much as possible. > > apt-get install --only-upgrade > > If I remember correctly, between 5.x and 6.x, there was a major upgrade > of kvm and corosync, which resulted in the disparition of node VMs > during the upgrade, that re-appear only at the end of the upgrade. > > And you have a new version of Debian when going from 5.x to 6.x, that > requires changes in Debian repositories, and then again from 6.x to 7.x. > > I would think less risky to upgrade first to 6.x, then in a second step, > to 7.x. > > Alain > > -- > Administrateur Syst?me/R?seau > C2N Centre de Nanosciences et Nanotechnologies (UMR 9001) > Boulevard Thomas Gobert (ex Avenue de La Vauve), 91120 Palaiseau > Tel : 01-70-27-06-88 Bureau A255 > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- david martin From gaio at lilliput.linux.it Fri Feb 11 12:03:54 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Fri, 11 Feb 2022 12:03:54 +0100 Subject: [PVE-User] Old, 32bit, debian and clock drift. Message-ID: PVE 6.4, updated to the latest patch. I've P2V an old debian squeeze box, 32bit, that have an impressive clock drift, even if they had 'ntpd' up&running. I've tried to remove '/etc/adjtime' and reboot, but nothing changed. The strange thing is i got in logs: Feb 10 15:15:37 sdinny kernel: [ 9481.004112] Clocksource tsc unstable (delta = 63065327 ns) but: sdinny:~# cat /sys/devices/system/clocksource/clocksource0/available_clocksource kvm-clock hpet acpi_pm sdinny:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource kvm-clock So 'tsc' is even not listed on clocksources, nor are used. What i'm missing?! Thanks. -- ``... 
La memoria conta veramente solo se tiene insieme l'impronta del presente e il progetto del futuro, se permette di fare senza dimenticare quel che si voleva fare, di diventare senza smettere di essere, di essere senza smettere di diventare...'' (Italo Calvino) From leesteken at protonmail.ch Sat Feb 12 12:06:06 2022 From: leesteken at protonmail.ch (Arjen) Date: Sat, 12 Feb 2022 11:06:06 +0000 Subject: [PVE-User] Old, 32bit, debian and clock drift. In-Reply-To: References: Message-ID: On Friday, February 11th, 2022 at 12:03, Marco Gaiarin wrote: > PVE 6.4, updated to the latest patch. > > > I've P2V an old debian squeeze box, 32bit, that have an impressive clock > drift, even if they had 'ntpd' up&running. > > I've tried to remove '/etc/adjtime' and reboot, but nothing changed. > > The strange thing is i got in logs: > > Feb 10 15:15:37 sdinny kernel: [ 9481.004112] Clocksource tsc unstable (delta = 63065327 ns) > > but: > sdinny:~# cat /sys/devices/system/clocksource/clocksource0/available_clocksource > kvm-clock hpet acpi_pm > sdinny:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource > kvm-clock > > So 'tsc' is even not listed on clocksources, nor are used. Possibly both the para-virtualized clock source and the ntp daemon are adjusting for drift and therefore overcompensating and making everything worse. Try disabling ntpd, stop and start the VM and see if that works better? It works well for my VMs, but they are not running 24/7. From gaio at lilliput.linux.it Sun Feb 13 21:14:24 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Sun, 13 Feb 2022 21:14:24 +0100 Subject: [PVE-User] Old, 32bit, debian and clock drift. In-Reply-To: ; from SmartGate on Sun, Feb 13, 2022 at 21:36:01PM +0100 References: Message-ID: Mandi! Arjen via pve-user In chel di` si favelave... > Possibly both the para-virtualized clock source and the ntp daemon are adjusting for drift and therefore overcompensating and making everything worse. > Try disabling ntpd, stop and start the VM and see if that works better? It works well for my VMs, but they are not running 24/7. Ahem, the VM in question IS the NTP server for that site... but i've a dozens of site like that, and this seems the only one with a clock drift... -- Di questa cavolo di pianura, di questa gente senza misura, che gia` confonde la notte e il giorno, e la partenza con il ritorno, e la richezza con i rumore, ed il diritto con il favore (F. De Gregori) From leesteken at protonmail.ch Sun Feb 13 22:07:38 2022 From: leesteken at protonmail.ch (Arjen) Date: Sun, 13 Feb 2022 21:07:38 +0000 Subject: [PVE-User] Old, 32bit, debian and clock drift. In-Reply-To: References: Message-ID: On Sunday, February 13th, 2022 at 21:14, Marco Gaiarin wrote: > Mandi! Arjen via pve-user > > In chel di` si favelave... > > > Possibly both the para-virtualized clock source and the ntp daemon are adjusting for drift and therefore overcompensating and making everything worse. > > > > Try disabling ntpd, stop and start the VM and see if that works better? It works well for my VMs, but they are not running 24/7. > > Ahem, the VM in question IS the NTP server for that site... but i've a > dozens of site like that, and this seems the only one with a clock drift... Sorry, I must have missed that. Other people also encountered your problem: https://v13.gr/2016/02/15/running-an-ntp-server-in-a-vm-using-kvm/ Instead of disabling ntpd, do not use the kvm-clock source on the NTP server VM. Either way, make sure not to combine the two. 
From wolf at wolfspyre.com Sun Feb 13 23:23:45 2022 From: wolf at wolfspyre.com (Wolf Noble) Date: Sun, 13 Feb 2022 16:23:45 -0600 Subject: [PVE-User] Installing proxmox such that it boots from internal sdcard (dell r720)? .. root on sas drives Message-ID: <31F64522-29EC-45FA-8A4B-4ED7DA1C2998@wolfspyre.com> Howdy all! is there a way to install proxmox such that the boot filesystem and bootloader is installed on the dual mirrored sd cards that are available on the dell 12g servers? I don't want to have the entirety of the root filesystem on them (waaaaaaay too slow) but having /boot and the initial bootloader on an isolated media would make it much easier to have a zfs root.... as well as having the root fs on fast media that might not be a supportable boot option. I'n not seeing an obvious way to configure this, but that doesn't mean it's not there hiding (probably in plain sight and I'm blind) Wolf Noble Hoof & Paw wolf at wolfspyre.com [= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =] From leesteken at protonmail.ch Mon Feb 14 08:08:47 2022 From: leesteken at protonmail.ch (Arjen) Date: Mon, 14 Feb 2022 07:08:47 +0000 Subject: [PVE-User] Installing proxmox such that it boots from internal sdcard (dell r720)? .. root on sas drives In-Reply-To: <31F64522-29EC-45FA-8A4B-4ED7DA1C2998@wolfspyre.com> References: <31F64522-29EC-45FA-8A4B-4ED7DA1C2998@wolfspyre.com> Message-ID: On Sunday, February 13th, 2022 at 23:23, Wolf Noble wrote: > Howdy all! > > is there a way to install proxmox such that the boot filesystem and bootloader is installed on the dual mirrored sd cards that are available on the dell 12g servers? > > I don't want to have the entirety of the root filesystem on them (waaaaaaay too slow) but having /boot and the initial bootloader on an isolated media would make it much easier to have a zfs root.... as well as having the root fs on fast media that might not be a supportable boot option. > > I'n not seeing an obvious way to configure this, but that doesn't mean it's not there hiding (probably in plain sight and I'm blind) You can add ESP partitions on the SD-cards with the proxmox-boot-tool[0] after installation, and remove the other ESP partitions. I think that is enough for Proxmox to (start to) boot and load ZFS drivers, but I don't think it includes /boot. [0]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_proxmox_boot_tool best regards, Arjen From mark at tuxis.nl Tue Feb 15 12:03:30 2022 From: mark at tuxis.nl (Mark Schouten) Date: Tue, 15 Feb 2022 12:03:30 +0100 Subject: [PBS] Adding IBM Tivoli Storage Manager as virtual tape device? Message-ID: Hi, Kinda off-topic, but since there isn?t a PBS-user mailinglist, I?ll just try it here. Is anyone aware if it is possible to add a IBM TSM appliance as a virtual tape device to Proxmox Backup Server? Please note that this question might be extremely stupid, but I?m just being lazy here. :) ? Mark Schouten, CTO Tuxis B.V. mark at tuxis.nl From s.ivanov at proxmox.com Wed Feb 16 09:36:34 2022 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Wed, 16 Feb 2022 09:36:34 +0100 Subject: [PVE-User] Installing proxmox such that it boots from internal sdcard (dell r720)? .. 
root on sas drives In-Reply-To: <31F64522-29EC-45FA-8A4B-4ED7DA1C2998@wolfspyre.com> References: <31F64522-29EC-45FA-8A4B-4ED7DA1C2998@wolfspyre.com> Message-ID: <20220216093634.56554ae9@rosa.proxmox.com> Hello, On Sun, 13 Feb 2022 16:23:45 -0600 Wolf Noble wrote: > Howdy all! > > is there a way to install proxmox such that the boot filesystem and bootloader is installed on the dual mirrored sd cards that are available on the dell 12g servers? Has been a while since I dealt with those machines - and cannot verify this here - but if you cannot boot from the disks in the internal bays you could try the following: * start the PVE installer in debug mode * let the installer run (exit the first 2 debug shells, and install regularly) * after the installation is done you get another debug shell * there re-import the zfs rpool, bind mount what's necessary and chroot into the new system - see [0] for steps. * inside format and init the sd-cards (if you want you can also create partitions on them - assuming the sd-card presents itself to the OS as /dev/sdX run: ** proxmox-boot-tool format /dev/sdX ** proxmox-boot-tool init /dev/sdX * exit the chroot, umount, export rpool * try booting from the sd-card but this is just from memory - have not tried this and am not sure if it will work smoothly. Good luck, stoiko [0] https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool#Repairing_a_System_Stuck_in_the_GRUB_Rescue_Shell > > I don't want to have the entirety of the root filesystem on them (waaaaaaay too slow) but having /boot and the initial bootloader on an isolated media would make it much easier to have a zfs root.... as well as having the root fs on fast media that might not be a supportable boot option. > > > I'n not seeing an obvious way to configure this, but that doesn't mean it's not there hiding (probably in plain sight and I'm blind) > > > > > > > Wolf Noble > Hoof & Paw > wolf at wolfspyre.com > > [= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =] > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From tsabolov at t8.ru Wed Feb 16 09:52:03 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Wed, 16 Feb 2022 11:52:03 +0300 Subject: [PVE-User] New Disk on one node of Cluster. Message-ID: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> Hi to all. I have 7 node's PVE Cluster + Ceph storage In 7 node I add new 2 disks and want to make specific new osd pool on Ceph. Is possible with new? disk create specific pool ? Thanks Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Feb 16 09:58:30 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 16 Feb 2022 09:58:30 +0100 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> Message-ID: <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> Hi Sergey, El 16/2/22 a las 9:52, ?????? ??????? escribi?: > > I have 7 node's PVE Cluster + Ceph storage > > In 7 node I add new 2 disks and want to make specific new osd pool on > Ceph. 
> > Is possible with new? disk create specific pool ? You are adding 2 additional disk in each node, right? You can assign them to a new pool, creating custom crush rules. Why do you want to use those disks for a different pool? What disks do you have now, and what disk are the new? (for example, are all HDD or SSD...) Cheers Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From a.lauterer at proxmox.com Wed Feb 16 10:03:11 2022 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Wed, 16 Feb 2022 10:03:11 +0100 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> Message-ID: <30fc5e92-cc52-b091-ca2f-d510884c2eeb@proxmox.com> You will need to use device classes. They are either set automatically depending on the type (HDD, SSD, NVME) or you can define your own. If you create the OSDs via the Proxmox VE GUI, you can just type in a new device class name instead of selecting one of the predefined ones. You then need to create rules that target the different device classes, as the default replicated rule will use all OSDs. Then assign all your pools the appropriate rule for the device class that they should use. The Ceph docs have more details in how to change the device class of an existing OSD and how to create those rules: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes Cheers, Aaron On 2/16/22 09:52, ?????? ??????? wrote: > Hi to all. > > I have 7 node's PVE Cluster + Ceph storage > > In 7 node I add new 2 disks and want to make specific new osd pool on Ceph. > > Is possible with new? disk create specific pool ? > > Thanks > > Sergey TS > The best Regard > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From tsabolov at t8.ru Wed Feb 16 10:24:04 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Wed, 16 Feb 2022 12:24:04 +0300 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> Message-ID: Hi Eneko, 16.02.2022 11:58, Eneko Lacunza ?????: > Hi Sergey, > > El 16/2/22 a las 9:52, ?????? ??????? escribi?: >> >> I have 7 node's PVE Cluster + Ceph storage >> >> In 7 node I add new 2 disks and want to make specific new osd pool on >> Ceph. >> >> Is possible with new? disk create specific pool ? > > You are adding 2 additional disk in each node, right? No, I add the new disk on node 7, not on each node of cluster. > > You can assign them to a new pool, creating custom crush rules. Yes this I know how is make new rules In one node I for test added 2 ssd disk and make the new rules| | |ceph osd crush rule create-replicated replicated_ssd default host ssd? and with this rule I? make new pool vm.ssd | > > Why do you want to use those disks for a different pool? What disks do > you have now, and what disk are the new? (for example, are all HDD or > SSD...) 
I want to make a new pool on HDD (SAS) disks for specific storage of some
Windows Server VMs.

The existing pools are:

 vm.pool      - base pool for VM disks
 cephfs_data  - some disks, ISOs and other data
 vm.ssd       - new pool I made from the 2 SSD disks

I tried to test the Windows Server disk speed for read/write and RND4K Q32T1
with CrystalDiskMark 8.0.4 x64.

If I configure the VM disk as SATA with SSD emulation, Cache: Write back and
Discard, the sequential read/write speed is very good, something like:

 SEQ1M Q8T1   1797.43 / 1713.07
 SEQ1M Q1T1   1790.77 / 1350.55

but the RND4K Q32T1 and RND4K Q1T1 results are not good, very small.

After these tests I think that if I add the 2 new disks and configure them as
a specific pool, maybe my RND4K Q32T1 and RND4K Q1T1 speed will get better.

Thank you

>
> Cheers
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 |https://www.binovo.es
> Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/

Sergey TS
The best Regard

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From tsabolov at t8.ru  Wed Feb 16 10:29:35 2022
From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=)
Date: Wed, 16 Feb 2022 12:29:35 +0300
Subject: [PVE-User] New Disk on one node of Cluster.
In-Reply-To: <30fc5e92-cc52-b091-ca2f-d510884c2eeb@proxmox.com>
References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru>
 <30fc5e92-cc52-b091-ca2f-d510884c2eeb@proxmox.com>
Message-ID: <6277a232-8bcc-3d4d-6b13-c09ec3ab4ceb@t8.ru>

Hi Aaron,

Thank you for the answer. For a test I added 2 SSD disks on one node and made
the new rule (replicated_ssd):

 ceph osd crush rule create-replicated replicated_ssd default host ssd

and with this rule I made the new pool vm.ssd.

Cheers,
Sergey

16.02.2022 12:03, Aaron Lauterer wrote:
> You will need to use device classes. They are either set automatically
> depending on the type (HDD, SSD, NVME) or you can define your own. If
> you create the OSDs via the Proxmox VE GUI, you can just type in a new
> device class name instead of selecting one of the predefined ones.
>
> You then need to create rules that target the different device
> classes, as the default replicated rule will use all OSDs. Then assign
> all your pools the appropriate rule for the device class that they
> should use.
>
> The Ceph docs have more details in how to change the device class of
> an existing OSD and how to create those rules:
> https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
>
> Cheers,
> Aaron
>
> On 2/16/22 09:52, Sergey Tsabolov wrote:
>> Hi to all.
>>
>> I have 7 node's PVE Cluster + Ceph storage
>>
>> In 7 node I add new 2 disks and want to make specific new osd pool on
>> Ceph.
>> >> Thanks >> >> Sergey TS >> The best Regard >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Feb 16 10:33:45 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 16 Feb 2022 10:33:45 +0100 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> Message-ID: Hi Sergey, So, does this really make sense? If you put the new 2 disks in node7 in a pool, that data won't be able to survive node7 failure. If you're trying to benchmark the disks, that wouldn't be a good test, because in a real deployment disk IO for only one VM would be worse (due to replication and network latencies). What IOPS are you getting in your 4K tests? You won't get near direct disk IOPS... Did you try with multiple parallel VMs? Aggregate 4K results should be much better :) Cheers El 16/2/22 a las 10:24, ?????? ??????? escribi?: > > Hi Eneko, > > 16.02.2022 11:58, Eneko Lacunza ?????: >> Hi Sergey, >> >> El 16/2/22 a las 9:52, ?????? ??????? escribi?: >>> >>> I have 7 node's PVE Cluster + Ceph storage >>> >>> In 7 node I add new 2 disks and want to make specific new osd pool >>> on Ceph. >>> >>> Is possible with new? disk create specific pool ? >> >> You are adding 2 additional disk in each node, right? > No, I add the new disk on node 7, not on each node of cluster. >> >> You can assign them to a new pool, creating custom crush rules. > > Yes this I know how is make new rules > > In one node I for test added 2 ssd disk and make the new rules| > | > > |ceph osd crush rule create-replicated replicated_ssd default host > ssd? and with this rule I? make new pool vm.ssd > | >> >> Why do you want to use those disks for a different pool? What disks >> do you have now, and what disk are the new? (for example, are all HDD >> or SSD...) > > I want make new pool with HDD - SAS for specific storage of some > Windows Server VM. > > In existing pools : > > vm.pool? base pool for VM disks > cephfs_data? some disks and ISO and other datas > vm.ssd?? new pool I make from 2 ssd disk > > I try to test the Windows Server disk speed for Read/Write and RND4K > Q32T1 with CrystalDiskMark 8.0.4x64 > > If I configure the VM disk to Sata and SSD emulation,Cache: Write back > and Discard, Speed write/read is very good? something like : > > SEQ1M Q8T1 1797.43/1713.07 > > SEQ1M Q1T1 1790.77/1350.55 > > but the RND4K Q32T1 and RND4K Q1T1 is not good, very small. > > After the test I think if I add 2 new disks,? configure is for > specific pool maybe my speed for? RND4K Q32T1 and RND4K Q1T1 maybe > they will get better > > Thank you > >> >> Cheers >> >> Eneko Lacunza >> Zuzendari teknikoa | Director t?cnico >> Binovo IT Human Project >> >> Tel. +34 943 569 206 |https://www.binovo.es >> Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun >> >> https://www.youtube.com/user/CANALBINOVO >> https://www.linkedin.com/company/37269706/ Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. 
+34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From tsabolov at t8.ru Wed Feb 16 10:54:38 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Wed, 16 Feb 2022 12:54:38 +0300 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> Message-ID: <1f2ec805-8fe2-159a-4f59-2831a3110b89@t8.ru> Hi Eneko, 16.02.2022 12:33, Eneko Lacunza ?????: > > Hi Sergey, > > So, does this really make sense? If you put the new 2 disks in node7 > in a pool, that data won't be able to survive node7 failure. You are right, if 7 node is failure data won't be able. But I think about if 2 disks/2 osd ad on new pool and is shared on all nodes. > > If you're trying to benchmark the disks, that wouldn't be a good test, > because in a real deployment disk IO for only one VM would be worse > (due to replication and network latencies). Not only for one VM, I have 2 and more in future? Windows VM > > What IOPS are you getting in your 4K tests? You won't get near direct > disk IOPS... I need to test the host disk or the VM disk ? > > Did you try with multiple parallel VMs? Aggregate 4K results should be > much better :) I think about this way, maybe is work. > > Cheers > > El 16/2/22 a las 10:24, ?????? ??????? escribi?: >> >> Hi Eneko, >> >> 16.02.2022 11:58, Eneko Lacunza ?????: >>> Hi Sergey, >>> >>> El 16/2/22 a las 9:52, ?????? ??????? escribi?: >>>> >>>> I have 7 node's PVE Cluster + Ceph storage >>>> >>>> In 7 node I add new 2 disks and want to make specific new osd pool >>>> on Ceph. >>>> >>>> Is possible with new? disk create specific pool ? >>> >>> You are adding 2 additional disk in each node, right? >> No, I add the new disk on node 7, not on each node of cluster. >>> >>> You can assign them to a new pool, creating custom crush rules. >> >> Yes this I know how is make new rules >> >> In one node I for test added 2 ssd disk and make the new rules| >> | >> >> |ceph osd crush rule create-replicated replicated_ssd default host >> ssd? and with this rule I? make new pool vm.ssd >> | >>> >>> Why do you want to use those disks for a different pool? What disks >>> do you have now, and what disk are the new? (for example, are all >>> HDD or SSD...) >> >> I want make new pool with HDD - SAS for specific storage of some >> Windows Server VM. >> >> In existing pools : >> >> vm.pool? base pool for VM disks >> cephfs_data? some disks and ISO and other datas >> vm.ssd?? new pool I make from 2 ssd disk >> >> I try to test the Windows Server disk speed for Read/Write and RND4K >> Q32T1 with CrystalDiskMark 8.0.4x64 >> >> If I configure the VM disk to Sata and SSD emulation,Cache: Write >> back and Discard, Speed write/read is very good something like : >> >> SEQ1M Q8T1 1797.43/1713.07 >> >> SEQ1M Q1T1 1790.77/1350.55 >> >> but the RND4K Q32T1 and RND4K Q1T1 is not good, very small. >> >> After the test I think if I add 2 new disks,? configure is for? >> specific pool maybe my speed for? RND4K Q32T1 and RND4K Q1T1 maybe >> they will get better >> >> Thank you >> >>> >>> Cheers >>> >>> Eneko Lacunza >>> Zuzendari teknikoa | Director t?cnico >>> Binovo IT Human Project >>> >>> Tel. +34 943 569 206 |https://www.binovo.es >>> Astigarragako Bidea, 2 - 2? izda. 
Oficina 10-11, 20180 Oiartzun >>> >>> https://www.youtube.com/user/CANALBINOVO >>> https://www.linkedin.com/company/37269706/ > > Eneko Lacunza > Zuzendari teknikoa | Director t?cnico > Binovo IT Human Project > > Tel. +34 943 569 206 |https://www.binovo.es > Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun > > https://www.youtube.com/user/CANALBINOVO > https://www.linkedin.com/company/37269706/ Sergey TS The best Regard _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Feb 16 10:59:28 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 16 Feb 2022 10:59:28 +0100 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: <1f2ec805-8fe2-159a-4f59-2831a3110b89@t8.ru> References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> <1f2ec805-8fe2-159a-4f59-2831a3110b89@t8.ru> Message-ID: <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es> Hi Sergey, El 16/2/22 a las 10:54, ?????? ??????? escribi?: > >> What IOPS are you getting in your 4K tests? You won't get near direct >> disk IOPS... > I need to test the host disk or the VM disk ? If you're worried about VM performance, then test VM disks... :) Cheers Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From elacunza at binovo.es Wed Feb 16 13:56:03 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 16 Feb 2022 13:56:03 +0100 Subject: PBS live restore lvm-thin/sparse issue Message-ID: <3d001ad3-9673-bb40-178c-e1a25941d516@binovo.es> Hi all, I'm preparing some scenarios for a PBS lab to be held tomorrow, and found something interesting. I have a brand-new installed Windows Server 2019 backed up in PBS, encrypted. This is a test VM, with only 40GB of disk. If I restore it to default local-lvm storage, everything is as expected and only about 5GB of actual space is used. If I live-restore it to the same storage, then a full 40GB of actual space used. It's like in live-restore mode sparse blocks aren't properly processed. Is this a known issue? I don't see any mention in the docs: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#vzdump_restore PVE and PBS are latest ISO versions. Cheers EnekoLacunza CTO | Zuzendari teknikoa Binovo IT Human Project 943 569 206 elacunza at binovo.es binovo.es Astigarragako Bidea, 2 - 2 izda. Oficina 10-11, 20180 Oiartzun youtube linkedin From wolf at wolfspyre.com Wed Feb 16 18:14:59 2022 From: wolf at wolfspyre.com (Wolf Noble) Date: Wed, 16 Feb 2022 11:14:59 -0600 Subject: [PVE-User] Installing proxmox such that it boots from internal sdcard (dell r720)? .. root on sas drives In-Reply-To: <20220216093634.56554ae9@rosa.proxmox.com> References: <20220216093634.56554ae9@rosa.proxmox.com> Message-ID: <415E89E0-58A2-4400-8214-6B46E5A47E9F@wolfspyre.com> I love this community. Thank you Stoiko! i?ll play with this and write up my experience and post back. 
???W [= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =] > On Feb 16, 2022, at 02:36, Stoiko Ivanov wrote: > > ?Hello, > >> On Sun, 13 Feb 2022 16:23:45 -0600 >> Wolf Noble wrote: >> >> Howdy all! >> >> is there a way to install proxmox such that the boot filesystem and bootloader is installed on the dual mirrored sd cards that are available on the dell 12g servers? > Has been a while since I dealt with those machines - and cannot verify > this here - but if you cannot boot from the disks in the internal bays you > could try the following: > * start the PVE installer in debug mode > * let the installer run (exit the first 2 debug shells, and install > regularly) > * after the installation is done you get another debug shell > * there re-import the zfs rpool, bind mount what's necessary and chroot > into the new system - see [0] for steps. > * inside format and init the sd-cards (if you want you can also create > partitions on them - assuming the sd-card presents itself to the OS as > /dev/sdX run: > ** proxmox-boot-tool format /dev/sdX > ** proxmox-boot-tool init /dev/sdX > * exit the chroot, umount, export rpool > * try booting from the sd-card > > but this is just from memory - have not tried this and am not sure if it > will work smoothly. > > Good luck, > stoiko > > > [0] > https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool#Repairing_a_System_Stuck_in_the_GRUB_Rescue_Shell > >> >> I don't want to have the entirety of the root filesystem on them (waaaaaaay too slow) but having /boot and the initial bootloader on an isolated media would make it much easier to have a zfs root.... as well as having the root fs on fast media that might not be a supportable boot option. >> >> >> I'n not seeing an obvious way to configure this, but that doesn't mean it's not there hiding (probably in plain sight and I'm blind) >> >> >> >> >> >> >> Wolf Noble >> Hoof & Paw >> wolf at wolfspyre.com >> >> [= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =] >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> > > From gaio at lilliput.linux.it Fri Feb 18 18:55:18 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Fri, 18 Feb 2022 18:55:18 +0100 Subject: [PVE-User] 2 host cluster, direct link: best configuration? Message-ID: <4535ei-8j11.ln1@hermione.lilliput.linux.it> We have to setup a little 2 node cluster, with symmetrical nodes, using replication. For that, we have added a 10G interface, and setup a direct link between the two server. But how is better to setup the link? Some ideas: 1) another network, different from LAN pro: simple cons: replica happen only on cluster/corosync interface? So i have to setup corosync on direct link addresses? 
2) ip override; eg LAN interfaces are 10.37.5.21/21 and 10.37.5.22/21; direct link interfaces are 10.37.5.21/32 and 10.37.5.22/32, and an explicitly route to other host pro: if direct link go down, we can simple tear down interfaces cons: no auto, interfaces have to tear down manually 3) setup a bridge around LAN and direct link cable, setup STP and leave the switches all the work. This config suggested, more or less, by: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server Thanks. -- Io chiedo quando sara` che l'uomo potra` imparare a vivere senza ammazzare e il vento si posera` (F. Guccini) From gianni.milo22 at gmail.com Sat Feb 19 20:00:43 2022 From: gianni.milo22 at gmail.com (GM) Date: Sat, 19 Feb 2022 19:00:43 +0000 Subject: [PVE-User] 2 host cluster, direct link: best configuration? In-Reply-To: <4535ei-8j11.ln1@hermione.lilliput.linux.it> References: <4535ei-8j11.ln1@hermione.lilliput.linux.it> Message-ID: I'd personally use option 1, dedicated nic(s) or vlans for LAN and Corosync network.Then I'd use the dedicated 10G link only for storage replication. All 3 networks would be setup on different network subnets. Storage replication uses "migration network" for the replication, so you could setup that on the dedicated 10G link, so that it does not interfere with the corosync/lan traffic. Migration network can be setup either via the datacenter.cfg or via the web gui (Datacenter -> Options -> Migration settings). On Sat, 19 Feb 2022 at 16:10, Marco Gaiarin wrote: > > We have to setup a little 2 node cluster, with symmetrical nodes, using > replication. > > For that, we have added a 10G interface, and setup a direct link between > the > two server. > > But how is better to setup the link? Some ideas: > > 1) another network, different from LAN > > pro: simple > cons: replica happen only on cluster/corosync interface? So i have to > setup corosync on direct link addresses? > > > 2) ip override; eg LAN interfaces are 10.37.5.21/21 and 10.37.5.22/21; > direct link interfaces are 10.37.5.21/32 and 10.37.5.22/32, and an > explicitly route to other host > > pro: if direct link go down, we can simple tear down interfaces > cons: no auto, interfaces have to tear down manually > > > 3) setup a bridge around LAN and direct link cable, setup STP and leave the > switches all the work. > > This config suggested, more or less, by: > https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server > > > Thanks. > > -- > Io chiedo quando sara` che l'uomo potra` imparare > a vivere senza ammazzare e il vento si posera` (F. Guccini) > > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From gaio at lilliput.linux.it Mon Feb 21 11:19:40 2022 From: gaio at lilliput.linux.it (Marco Gaiarin) Date: Mon, 21 Feb 2022 11:19:40 +0100 Subject: [PVE-User] Old, 32bit, debian and clock drift. In-Reply-To: ; from SmartGate on Mon, Feb 21, 2022 at 18:36:02PM +0100 References: Message-ID: Mandi! Arjen via pve-user In chel di` si favelave... > Sorry, I must have missed that. Other people also encountered your problem: > https://v13.gr/2016/02/15/running-an-ntp-server-in-a-vm-using-kvm/ > Instead of disabling ntpd, do not use the kvm-clock source on the NTP server VM. > Either way, make sure not to combine the two. 
OK, I've given it a try for now, manually changing the clocksource with:

 echo 'hpet' > /sys/devices/system/clocksource/clocksource0/current_clocksource

But some things seem strange to me...

1) this happens on *TWO* VMs in two different installations; I have dozens of
similar installations where:

 root at vdmsv1:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
 kvm-clock

and the clock is perfectly in sync; true, these problematic VMs run a pretty
old Debian, but not older than others that work as expected.

2) other docs, like:

 https://docs.oracle.com/en/database/oracle/oracle-database/21/ladbi/setting-clock-source-vm.html

confirm that 'tsc' is the preferred clock source for VMs; but on the boxes
that show the trouble, the 'tsc' clock source gets kicked out as 'unreliable'.

So, doing a little survey of my VMs, it seems that *all* VMs use 'kvm-clock'
as the default clock source, but on those two VMs the 'tsc' clock source is
unreliable and gets kicked out, while 'kvm-clock' is unreliable too but does
not get kicked out.

Really, really strange...

-- 
  Chi parla male, pensa male e vive male. Bisogna trovare le parole giuste:
  le parole sono importanti!	Nanni Moretti in Palombella Rossa

From gaio at lilliput.linux.it  Mon Feb 21 18:34:45 2022
From: gaio at lilliput.linux.it (Marco Gaiarin)
Date: Mon, 21 Feb 2022 18:34:45 +0100
Subject: [PVE-User] Old, 32bit, debian and clock drift.
In-Reply-To: ; from SmartGate on Mon, Feb 21, 2022 at 18:36:02PM +0100
References: 
Message-ID: 

Mandi! Marco Gaiarin
  In chel di` si favelave...

> OK, I've given it a try for now, manually changing the clocksource with:
> echo 'hpet' > /sys/devices/system/clocksource/clocksource0/current_clocksource

No, the 'hpet' clocksource drifts too...

-- 
  Se non trovi nessuno vuol dire che siamo scappati alle sei-shell (bash, tcsh,csh...)	(Possi)

From gaio at lilliput.linux.it  Mon Feb 21 13:14:57 2022
From: gaio at lilliput.linux.it (Marco Gaiarin)
Date: Mon, 21 Feb 2022 13:14:57 +0100
Subject: [PVE-User] 2 host cluster, direct link: best configuration?
In-Reply-To: ; from SmartGate on Mon, Feb 21, 2022 at 18:36:02PM +0100
References: <4535ei-8j11.ln1@hermione.lilliput.linux.it>
Message-ID: 

Mandi! GM
  In chel di` si favelave...

> I'd personally use option 1, dedicated nic(s) or vlans for LAN and Corosync
> network. Then I'd use the dedicated 10G link only for storage replication.
> All 3 networks would be setup on different network subnets. Storage
> replication uses "migration network" for the replication, so you could
> setup that on the dedicated 10G link, so that it does not interfere with
> the corosync/lan traffic. Migration network can be setup either via the
> datacenter.cfg or via the web gui (Datacenter -> Options -> Migration
> settings).

Cool! Thanks for the hint!

-- 
  Non sara` il canto delle sirene che ci innamorera`
  noi lo conosciamo bene, l'abbiamo sentito gia`	(F. De Gregori)

From lists at merit.unu.edu  Tue Feb 22 13:03:06 2022
From: lists at merit.unu.edu (mj)
Date: Tue, 22 Feb 2022 13:03:06 +0100
Subject: [PVE-User] 10G mesh to 40G DAC
Message-ID: <639da1da-5e4c-7c13-03cb-ce716cd07954@merit.unu.edu>

Hi,

I just wanted to share my experience of upgrading our cluster from 10G mesh
networking (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server)
to Arista 40G MLAG over DAC cables.

I expected DAC over switches to have slightly higher latency than the direct
(meshed) UTP RJ45 connections. Throughput would of course be much higher in
the new config. Ceph (on SSDs) is used mainly for VM images, RBD.
So, after having completed the migration yesterday, I can say that the VMs feel much quicker. Specially the VMs using databases (LAMP servers and a couple of management systems) are much more responsive now than they were before. Over all, we're very happy now. For the record, we're using dual arista DCS-7050QX-32S MLAG now. Purchased refurb for around 1000 euro each, including three years warranty. Just thought I'd share. :-) MJ From wolf at wolfspyre.com Tue Feb 22 17:35:51 2022 From: wolf at wolfspyre.com (Wolf Noble) Date: Tue, 22 Feb 2022 10:35:51 -0600 Subject: [PVE-User] 10G mesh to 40G DAC In-Reply-To: <639da1da-5e4c-7c13-03cb-ce716cd07954@merit.unu.edu> References: <639da1da-5e4c-7c13-03cb-ce716cd07954@merit.unu.edu> Message-ID: Cool!! is there some set of consistent synthetic action-sets cluster maintainers can perform to illustrate a vaguely useful performance benchmark that would facilitate some objective comparable metrics? would it be valuable in cases like this? [= The contents of this message have been written, read, processed, erased, sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, found, and most importantly delivered entirely with recycled electrons =] From elacunza at binovo.es Tue Feb 22 18:10:36 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 22 Feb 2022 18:10:36 +0100 Subject: [PVE-User] PBS live restore lvm-thin/sparse issue In-Reply-To: References: Message-ID: <2e14ccc5-2c4a-f31e-3529-3bf55ea3b905@binovo.es> I didn't receive any input on this. We normally don't use lvm-thin, should I file a issue? :) El 16/2/22 a las 13:56, Eneko Lacunza via pve-user escribi?: > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From luiscoralle at fi.uncoma.edu.ar Wed Feb 23 03:03:12 2022 From: luiscoralle at fi.uncoma.edu.ar (Luis G. Coralle) Date: Tue, 22 Feb 2022 23:03:12 -0300 Subject: [PVE-User] Problem with backup Message-ID: I have a 5 nodes cluster with proxmox pve 7.1-10. I have problem when I start a backup on a shared NFS backup storage, it shows the following message: Some errors have been encountered: > pve5: Parameter verification failed. (400) *next-run*: property is not defined in schema and the schema does not allow > additional properties > *schedule*: property is not defined in schema and the schema does not > allow additional properties > *type*: property is not defined in schema and the schema does not allow > additional prope Has it happen anyone? -- Luis G. Coralle Secretar?a de TIC Facultad de Inform?tica Universidad Nacional del Comahue (+54) 299-4490300 Int 647 From elacunza at binovo.es Wed Feb 23 11:41:29 2022 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 23 Feb 2022 11:41:29 +0100 Subject: [PVE-User] Problem with backup In-Reply-To: References: Message-ID: Try refreshing Web interface. All nodes are upgraded to 7.1? El 23/2/22 a las 3:03, Luis G. Coralle escribi?: > I have a 5 nodes cluster with proxmox pve 7.1-10. > I have problem when I start a backup on a shared NFS backup storage, it > shows the following message: > > > Some errors have been encountered: >> pve5: Parameter verification failed. 
(400) > > *next-run*: property is not defined in schema and the schema does not allow >> additional properties >> *schedule*: property is not defined in schema and the schema does not >> allow additional properties >> *type*: property is not defined in schema and the schema does not allow >> additional prope > > > > Has it happen anyone? > Eneko Lacunza Zuzendari teknikoa | Director t?cnico Binovo IT Human Project Tel. +34 943 569 206 |https://www.binovo.es Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun https://www.youtube.com/user/CANALBINOVO https://www.linkedin.com/company/37269706/ From luiscoralle at fi.uncoma.edu.ar Thu Feb 24 03:45:59 2022 From: luiscoralle at fi.uncoma.edu.ar (Luis G. Coralle) Date: Wed, 23 Feb 2022 23:45:59 -0300 Subject: [PVE-User] Problem with backup In-Reply-To: References: Message-ID: Hi, the problem was that one VM had mounted a non-existent .iso image. El mi?, 23 feb 2022 a la(s) 07:42, Eneko Lacunza via pve-user ( pve-user at lists.proxmox.com) escribi?: > > > > ---------- Forwarded message ---------- > From: Eneko Lacunza > To: pve-user at lists.proxmox.com > Cc: > Bcc: > Date: Wed, 23 Feb 2022 11:41:29 +0100 > Subject: Re: [PVE-User] Problem with backup > Try refreshing Web interface. All nodes are upgraded to 7.1? > > El 23/2/22 a las 3:03, Luis G. Coralle escribi?: > > I have a 5 nodes cluster with proxmox pve 7.1-10. > > I have problem when I start a backup on a shared NFS backup storage, it > > shows the following message: > > > > > > Some errors have been encountered: > >> pve5: Parameter verification failed. (400) > > > > *next-run*: property is not defined in schema and the schema does not > allow > >> additional properties > >> *schedule*: property is not defined in schema and the schema does not > >> allow additional properties > >> *type*: property is not defined in schema and the schema does not allow > >> additional prope > > > > > > > > Has it happen anyone? > > > > Eneko Lacunza > Zuzendari teknikoa | Director t?cnico > Binovo IT Human Project > > Tel. +34 943 569 206 |https://www.binovo.es > Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun > > https://www.youtube.com/user/CANALBINOVO > https://www.linkedin.com/company/37269706/ > > > > ---------- Forwarded message ---------- > From: Eneko Lacunza via pve-user > To: pve-user at lists.proxmox.com > Cc: Eneko Lacunza > Bcc: > Date: Wed, 23 Feb 2022 11:41:29 +0100 > Subject: Re: [PVE-User] Problem with backup > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- Luis G. Coralle Secretar?a de TIC Facultad de Inform?tica Universidad Nacional del Comahue (+54) 299-4490300 Int 647 From tsabolov at t8.ru Thu Feb 24 13:29:33 2022 From: tsabolov at t8.ru (=?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?=) Date: Thu, 24 Feb 2022 15:29:33 +0300 Subject: [PVE-User] New Disk on one node of Cluster. In-Reply-To: <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es> References: <10232bf8-ad79-e262-9861-bcc88f3c1bb9@t8.ru> <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es> <1f2ec805-8fe2-159a-4f59-2831a3110b89@t8.ru> <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es> Message-ID: Hi Eneko, I make some test and found if one node remove from Cluster and restore the VM on it the VM disk performance is very well! I have some ideas to test other methods about performance. 
My question is: if I add 1 or 2 SSD disks to all nodes and move the *Ceph
journal to SSD*, will VM and Ceph performance be better for the virtual
machines? With the Ceph journal on SSD, does Ceph work better and faster?

Does someone have experience with moving the Ceph journal to SSD? And how
many GB of SSD are enough for the journal?

Thank you.

16.02.2022 12:59, Eneko Lacunza wrote:
> Hi Sergey,
>
> El 16/2/22 a las 10:54, Sergey Tsabolov escribió:
>>
>>> What IOPS are you getting in your 4K tests? You won't get near
>>> direct disk IOPS...
>> I need to test the host disk or the VM disk ?
>
> If you're worried about VM performance, then test VM disks... :)
>
> Cheers
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 |https://www.binovo.es
> Astigarragako Bidea, 2 - 2ª izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/

Sergey TS
The best Regard

_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
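A note on terminology: on current Ceph releases OSDs are BlueStore by default, so there is no separate journal any more; the equivalent of "journal on SSD" is putting the OSD's RocksDB/WAL on the faster device. Below is a minimal sketch of how that can look on a Proxmox VE node when (re)creating an OSD; /dev/sdX (data HDD), /dev/sdY (SSD), the 60 GiB DB size and the OSD id 12 are placeholders, and the sizing should be checked against the Ceph guidance for the release in use:

 # create a BlueStore OSD on the HDD, with its DB (and implicitly the WAL)
 # placed on the SSD
 pveceph osd create /dev/sdX --db_dev /dev/sdY --db_size 60

 # existing OSDs cannot be converted in place: mark one out, wait for the
 # data to rebalance away, destroy it, re-create it with --db_dev, repeat
 ceph osd out 12
 pveceph osd destroy 12 --cleanup 1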