From lindsay.mathieson at gmail.com Wed Jul 1 01:47:33 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Wed, 1 Jul 2020 09:47:33 +1000 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> Message-ID: <18c9e05f-11e2-05d6-fe62-03184f93a9a9@gmail.com> On 30/06/2020 11:09 pm, Mark Schouten wrote: > I think this is incorrect. Using KRBD uses the kernel-driver which is > usually older than the userland-version. Also, upgrading is easier when > not using KRBD. Older yes - Luminous (12.x) But it supports sufficient features and I found it considerably faster than the user driver. -- Lindsay From lindsay.mathieson at gmail.com Wed Jul 1 01:49:24 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Wed, 1 Jul 2020 09:49:24 +1000 Subject: [PVE-User] Ceph Bluestore - lvmcache versus WAL/DB on SSD In-Reply-To: <20200630131233.z3ezuatxnys6n637@shell.tuxis.net> References: <20200630131233.z3ezuatxnys6n637@shell.tuxis.net> Message-ID: On 30/06/2020 11:12 pm, Mark Schouten wrote: > Could be that (deep) scrubs are periodically killing your performance. > There are some tweaks available to make them less invading: > > osd_scrub_chunk_min=20 # 5 > osd_scrub_sleep=4 # 0 Thanks, I don't think its deep scrub - things seem ok when a scrub is running, but I will test it to be sure. Have also been meaning to restrict them to after hours. > > And then some: > https://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/ > > Best option is really to place spinning disks with SSD.. Would love to, alas, SMB budget restrictions. Wanting champagne performance on a beer budget :) -- Lindsay From jameslipski at protonmail.com Wed Jul 1 18:54:27 2020 From: jameslipski at protonmail.com (jameslipski) Date: Wed, 01 Jul 2020 16:54:27 +0000 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <18c9e05f-11e2-05d6-fe62-03184f93a9a9@gmail.com> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> <18c9e05f-11e2-05d6-fe62-03184f93a9a9@gmail.com> Message-ID: Thank you, there is definitely an improvement to using krbd -- not seeing any i/o waits. ??????? Original Message ??????? On Tuesday, June 30, 2020 7:47 PM, Lindsay Mathieson wrote: > On 30/06/2020 11:09 pm, Mark Schouten wrote: > > > I think this is incorrect. Using KRBD uses the kernel-driver which is > > usually older than the userland-version. Also, upgrading is easier when > > not using KRBD. > > Older yes - Luminous (12.x) > > But it supports sufficient features and I found it considerably faster > than the user driver. > > ---------------------------------------------------------------------------------------------------------------------------- > > Lindsay > > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From lindsay.mathieson at gmail.com Thu Jul 2 00:56:07 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Thu, 2 Jul 2020 08:56:07 +1000 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> <18c9e05f-11e2-05d6-fe62-03184f93a9a9@gmail.com> Message-ID: <29a739c4-99e1-61f0-db2a-ad5e11116dd4@gmail.com> On 2/07/2020 2:54 am, jameslipski via pve-user wrote: > Thank you, there is definitely an improvement to using krbd -- not seeing any i/o waits. Excellent, glad to hear it. 
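For anyone wanting to try the same switch: whether Proxmox VE uses librbd or the kernel client is a per-storage option. A minimal sketch, assuming an existing RBD storage entry named 'ceph-vm' (the name is only an example):

  pvesm set ceph-vm --krbd 1

or, equivalently, add 'krbd 1' to the matching rbd: section in /etc/pve/storage.cfg. Running guests should only pick the change up once they are stopped and started again (or migrated), since the image is mapped when the disk is activated.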
-- Lindsay From lindsay.mathieson at gmail.com Thu Jul 2 01:06:54 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Thu, 2 Jul 2020 09:06:54 +1000 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> Message-ID: <21a85fbb-a67c-ad98-f8d4-c02be09ef3e4@gmail.com> On 30/06/2020 11:09 pm, Mark Schouten wrote: > I think this is incorrect. Using KRBD uses the kernel-driver which is > usually older than the userland-version. Also, upgrading is easier when > not using KRBD. > > I'd like to hear that I'm wrong, am I?:) I did some adhoc testing last night - definitely a difference, in KRBD's favour. Both sequential and random IO was much better with it enabled. There's some interesting theads online regards librbd vs KRBD. Apparently since 12.x, librd should perform better, but a lot of people aren't seeing it. -- Lindsay From mark at tuxis.nl Thu Jul 2 15:15:20 2020 From: mark at tuxis.nl (Mark Schouten) Date: Thu, 2 Jul 2020 15:15:20 +0200 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <21a85fbb-a67c-ad98-f8d4-c02be09ef3e4@gmail.com> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> <21a85fbb-a67c-ad98-f8d4-c02be09ef3e4@gmail.com> Message-ID: <20200702131520.uf7huisga3sq6rzj@shell.tuxis.net> On Thu, Jul 02, 2020 at 09:06:54AM +1000, Lindsay Mathieson wrote: > I did some adhoc testing last night - definitely a difference, in KRBD's > favour. Both sequential and random IO was much better with it enabled. Interesting! I just did some testing too on our demo cluster. Ceph with 6 osd's over three nodes, size 2. root at node04:~# pveversion pve-manager/6.2-6/ee1d7754 (running kernel: 5.4.41-1-pve) root at node04:~# ceph -v ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable) rbd create fio_test --size 10G -p Ceph rbd create map_test --size 10G -p Ceph rbd map Ceph/map_test When just using a write test (rw=randwrite) krbd wins, big. rbd: WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=10.0GiB (10.7GB), run=269904-269904msec krbd: WRITE: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=10.0GiB (10.7GB), run=49582-49582msec However, using rw=randrw (rwmixread=75), things change a lot: rbd: READ: bw=49.0MiB/s (52.4MB/s), 49.0MiB/s-49.0MiB/s (52.4MB/s-52.4MB/s), io=7678MiB (8051MB), run=153607-153607msec WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=2562MiB (2687MB), run=153607-153607msec krbd: READ: bw=5511KiB/s (5643kB/s), 5511KiB/s-5511KiB/s (5643kB/s-5643kB/s), io=7680MiB (8053MB), run=1426930-1426930msec WRITE: bw=1837KiB/s (1881kB/s), 1837KiB/s-1837KiB/s (1881kB/s-1881kB/s), io=2560MiB (2685MB), run=1426930-1426930msec Maybe I'm interpreting or testing stuff wrong, but it looks like simply writing to krbd is much faster, but actually trying to use that data seems slower. Let me know what you guys think. 
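One caveat worth checking before reading too much into the numbers: the libaio jobs in the job file further down run against /dev/rbd0 without direct=1, so the krbd randwrite result may largely be measuring the host page cache, while the rbd ioengine always goes through librbd to the cluster. A possible cross-check, sketched with a hypothetical job name:

  [krbd_write_direct]
  ioengine=libaio
  filename=/dev/rbd0
  direct=1
  rw=randwrite

If the gap shrinks a lot with direct=1, the original write comparison was partly cache versus network rather than krbd versus librbd.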
Attachments are being stripped, IIRC, so here's the config and the full output of the tests: ============RESULTS================== rbd_write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=32 rbd_readwrite: (g=1): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=32 krbd_write: (g=2): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 krbd_readwrite: (g=3): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 fio-3.12 Starting 4 processes Jobs: 1 (f=1): [_(3),f(1)][100.0%][eta 00m:00s] rbd_write: (groupid=0, jobs=1): err= 0: pid=1846441: Thu Jul 2 15:08:42 2020 write: IOPS=9712, BW=37.9MiB/s (39.8MB/s)(10.0GiB/269904msec); 0 zone resets slat (nsec): min=943, max=1131.9k, avg=6367.94, stdev=10934.84 clat (usec): min=1045, max=259066, avg=3286.70, stdev=4553.24 lat (usec): min=1053, max=259069, avg=3293.06, stdev=4553.20 clat percentiles (usec): | 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2573], | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3228], | 70.00th=[ 3425], 80.00th=[ 3621], 90.00th=[ 3982], 95.00th=[ 4359], | 99.00th=[ 5538], 99.50th=[ 6718], 99.90th=[ 82314], 99.95th=[125305], | 99.99th=[187696] bw ( KiB/s): min=17413, max=40282, per=83.81%, avg=32561.17, stdev=3777.39, samples=539 iops : min= 4353, max=10070, avg=8139.93, stdev=944.34, samples=539 lat (msec) : 2=2.64%, 4=87.80%, 10=9.37%, 20=0.08%, 50=0.01% lat (msec) : 100=0.02%, 250=0.09%, 500=0.01% cpu : usr=8.73%, sys=5.27%, ctx=1254152, majf=0, minf=8484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% issued rwts: total=0,2621440,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=32 rbd_readwrite: (groupid=1, jobs=1): err= 0: pid=1852029: Thu Jul 2 15:08:42 2020 read: IOPS=12.8k, BW=49.0MiB/s (52.4MB/s)(7678MiB/153607msec) slat (nsec): min=315, max=4467.8k, avg=3247.91, stdev=7360.28 clat (usec): min=276, max=160495, avg=1412.53, stdev=656.11 lat (usec): min=281, max=160497, avg=1415.78, stdev=656.02 clat percentiles (usec): | 1.00th=[ 494], 5.00th=[ 693], 10.00th=[ 832], 20.00th=[ 1012], | 30.00th=[ 1139], 40.00th=[ 1254], 50.00th=[ 1352], 60.00th=[ 1450], | 70.00th=[ 1549], 80.00th=[ 1696], 90.00th=[ 1926], 95.00th=[ 2343], | 99.00th=[ 3621], 99.50th=[ 3949], 99.90th=[ 5604], 99.95th=[ 7373], | 99.99th=[11207] bw ( KiB/s): min=25546, max=50344, per=78.44%, avg=40147.73, stdev=2610.22, samples=306 iops : min= 6386, max=12586, avg=10036.57, stdev=652.54, samples=306 write: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(2562MiB/153607msec); 0 zone resets slat (nsec): min=990, max=555362, avg=5474.97, stdev=6241.91 clat (usec): min=1052, max=196165, avg=3239.08, stdev=3722.92 lat (usec): min=1056, max=196171, avg=3244.55, stdev=3722.91 clat percentiles (usec): | 1.00th=[ 1663], 5.00th=[ 1991], 10.00th=[ 2180], 20.00th=[ 2442], | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3130], | 70.00th=[ 3359], 80.00th=[ 3654], 90.00th=[ 4359], 95.00th=[ 5014], | 99.00th=[ 6325], 99.50th=[ 7177], 99.90th=[ 40109], 99.95th=[104334], | 99.99th=[175113] bw ( KiB/s): min= 8450, max=17786, per=78.45%, avg=13398.97, stdev=891.56, samples=306 iops : min= 2112, max= 4446, avg=3349.35, stdev=222.88, samples=306 lat (usec) : 500=0.79%, 
750=4.30%, 1000=9.19% lat (msec) : 2=55.67%, 4=26.22%, 10=3.78%, 20=0.03%, 50=0.01% lat (msec) : 100=0.01%, 250=0.02% cpu : usr=13.97%, sys=7.94%, ctx=1729014, majf=0, minf=2214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% issued rwts: total=1965537,655903,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=32 krbd_write: (groupid=2, jobs=1): err= 0: pid=1855430: Thu Jul 2 15:08:42 2020 write: IOPS=52.9k, BW=207MiB/s (217MB/s)(10.0GiB/49582msec); 0 zone resets slat (nsec): min=1624, max=41411k, avg=17942.28, stdev=482539.15 clat (nsec): min=1495, max=41565k, avg=586889.90, stdev=2650654.87 lat (usec): min=3, max=41568, avg=604.90, stdev=2691.73 clat percentiles (usec): | 1.00th=[ 92], 5.00th=[ 93], 10.00th=[ 93], 20.00th=[ 94], | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 102], | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 139], 95.00th=[ 161], | 99.00th=[14877], 99.50th=[18482], 99.90th=[18744], 99.95th=[22676], | 99.99th=[22938] bw ( KiB/s): min=61770, max=1314960, per=94.28%, avg=199384.09, stdev=331335.15, samples=99 iops : min=15442, max=328740, avg=49845.71, stdev=82833.70, samples=99 lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=55.51% lat (usec) : 250=41.04%, 500=0.12%, 750=0.01%, 1000=0.01% lat (msec) : 2=0.01%, 10=0.01%, 20=3.22%, 50=0.07% cpu : usr=6.29%, sys=11.90%, ctx=4350, majf=0, minf=12 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% issued rwts: total=0,2621440,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=32 krbd_readwrite: (groupid=3, jobs=1): err= 0: pid=1858884: Thu Jul 2 15:08:42 2020 read: IOPS=1377, BW=5511KiB/s (5643kB/s)(7680MiB/1426930msec) slat (usec): min=200, max=145771, avg=716.74, stdev=355.05 clat (usec): min=31, max=169171, avg=16877.60, stdev=2881.41 lat (usec): min=692, max=170099, avg=17594.97, stdev=2937.38 clat percentiles (usec): | 1.00th=[11207], 5.00th=[12780], 10.00th=[13698], 20.00th=[14746], | 30.00th=[15533], 40.00th=[16188], 50.00th=[16712], 60.00th=[17433], | 70.00th=[17957], 80.00th=[18744], 90.00th=[20055], 95.00th=[21103], | 99.00th=[25035], 99.50th=[28443], 99.90th=[34866], 99.95th=[39060], | 99.99th=[57410] bw ( KiB/s): min= 2312, max= 6776, per=99.99%, avg=5510.53, stdev=292.16, samples=2853 iops : min= 578, max= 1694, avg=1377.63, stdev=73.04, samples=2853 write: IOPS=459, BW=1837KiB/s (1881kB/s)(2560MiB/1426930msec); 0 zone resets slat (nsec): min=1731, max=131919, avg=8242.43, stdev=5170.50 clat (usec): min=4, max=169165, avg=16871.71, stdev=2885.14 lat (usec): min=22, max=169182, avg=16880.10, stdev=2885.33 clat percentiles (usec): | 1.00th=[11207], 5.00th=[12780], 10.00th=[13698], 20.00th=[14746], | 30.00th=[15533], 40.00th=[16188], 50.00th=[16712], 60.00th=[17433], | 70.00th=[17957], 80.00th=[18744], 90.00th=[20055], 95.00th=[21103], | 99.00th=[25297], 99.50th=[28181], 99.90th=[34866], 99.95th=[38536], | 99.99th=[58459] bw ( KiB/s): min= 696, max= 2368, per=100.00%, avg=1837.14, stdev=169.59, samples=2853 iops : min= 174, max= 592, avg=459.28, stdev=42.39, samples=2853 lat (usec) : 10=0.01%, 50=0.01%, 750=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.21%, 20=90.02%, 50=9.75% lat (msec) : 
100=0.02%, 250=0.01% cpu : usr=1.34%, sys=3.68%, ctx=1966986, majf=0, minf=15 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% issued rwts: total=1966000,655440,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=32 Run status group 0 (all jobs): WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=10.0GiB (10.7GB), run=269904-269904msec Run status group 1 (all jobs): READ: bw=49.0MiB/s (52.4MB/s), 49.0MiB/s-49.0MiB/s (52.4MB/s-52.4MB/s), io=7678MiB (8051MB), run=153607-153607msec WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=2562MiB (2687MB), run=153607-153607msec Run status group 2 (all jobs): WRITE: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=10.0GiB (10.7GB), run=49582-49582msec Run status group 3 (all jobs): READ: bw=5511KiB/s (5643kB/s), 5511KiB/s-5511KiB/s (5643kB/s-5643kB/s), io=7680MiB (8053MB), run=1426930-1426930msec WRITE: bw=1837KiB/s (1881kB/s), 1837KiB/s-1837KiB/s (1881kB/s-1881kB/s), io=2560MiB (2685MB), run=1426930-1426930msec Disk stats (read/write): rbd0: ios=1965643/893981, merge=0/2379481, ticks=1366950/16608305, in_queue=14637096, util=95.65% ============FIO CONFIG================== [global] invalidate=0 bs=4k iodepth=32 stonewall [rbd_write] ioengine=rbd clientname=admin pool=Ceph rbdname=fio_test rw=randwrite [rbd_readwrite] ioengine=rbd clientname=admin pool=Ceph rbdname=fio_test rw=randrw rwmixread=75 [krbd_write] ioengine=libaio filename=/dev/rbd0 rw=randwrite [krbd_readwrite] ioengine=libaio filename=/dev/rbd0 rw=randrw rwmixread=75 -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From aderumier at odiso.com Thu Jul 2 15:57:32 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 2 Jul 2020 15:57:32 +0200 (CEST) Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <20200702131520.uf7huisga3sq6rzj@shell.tuxis.net> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> <21a85fbb-a67c-ad98-f8d4-c02be09ef3e4@gmail.com> <20200702131520.uf7huisga3sq6rzj@shell.tuxis.net> Message-ID: <1349332156.660711.1593698252401.JavaMail.zimbra@odiso.com> Hi, you should give it a try to ceph octopus. librbd have greatly improved for write, and I can recommand to enable writeback now by default Here some iops result with 1vm - 1disk - 4k block iodepth=64, librbd, no iothread. nautilus-cache=none nautilus-cache=writeback octopus-cache=none octopus-cache=writeback randread 4k 62.1k 25.2k 61.1k 60.8k randwrite 4k 27.7k 19.5k 34.5k 53.0k seqwrite 4k 7850 37.5k 24.9k 82.6k ----- Mail original ----- De: "Mark Schouten" ?: "proxmoxve" Envoy?: Jeudi 2 Juillet 2020 15:15:20 Objet: Re: [PVE-User] High I/O waits, not sure if it's a ceph issue. On Thu, Jul 02, 2020 at 09:06:54AM +1000, Lindsay Mathieson wrote: > I did some adhoc testing last night - definitely a difference, in KRBD's > favour. Both sequential and random IO was much better with it enabled. Interesting! I just did some testing too on our demo cluster. Ceph with 6 osd's over three nodes, size 2. 
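To try the writeback recommendation from Alexandre's table above on a single guest, the cache mode is set per virtual disk. A sketch, assuming VM 100 has its disk as scsi0 on an RBD storage named 'ceph-vm' (IDs and names are examples only):

  qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writeback

The same setting is available when editing the disk in the GUI, and it takes effect after the guest is stopped and started again.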
[Mark's full fio results and job file, quoted verbatim from his message above, snipped] -- Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Thu Jul 2 20:00:55 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 2 Jul 2020 15:00:55 -0300 Subject: [PVE-User] LVM-thin from one server to other seems show wrong size Message-ID: Hi there I have two servers in cluster, but using lvm-thin (aka local-lvm). I have named the servers as pve01 and pve02. I have created a vm with 150GB of disk space in pve02. In pve02 lvs show me that LSize was 150GB and the Data% was show up about 15%. Bu then I have migrated the vm from pve02 to pve01 and now, when I list the lvm with lvs show me that: lvs | grep 154 vm-154-disk-0 pve Vwi-aotz-- 140.00g data 100.00 Why Data% show 100? I don't get it! Thanks for any advice. --- Gilberto Nunes Ferreira From lindsay.mathieson at gmail.com Fri Jul 3 05:09:43 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 3 Jul 2020 13:09:43 +1000 Subject: [PVE-User] Ceph Octopus Message-ID: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> Any plans/schedule for updating the PVE version of Ceph to Octopus (15.x)? Just curious. Though I gather librd has seen a number of performance enhancements. -- Lindsay From devin at pabstatencio.com Fri Jul 3 07:29:45 2020 From: devin at pabstatencio.com (Devin A) Date: Fri, 3 Jul 2020 00:29:45 -0500 Subject: [PVE-User] Ceph Octopus In-Reply-To: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> References: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> Message-ID: Yes they are working on the new version. Will be released sometime soon I suspect. On July 2, 2020 at 8:10:56 PM, Lindsay Mathieson ( lindsay.mathieson at gmail.com) wrote: Any plans/schedule for updating the PVE version of Ceph to Octopus (15.x)? Just curious. Though I gather librd has seen a number of performance br/>enhancements. < -- br/>Lindsay < _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From lindsay.mathieson at gmail.com Fri Jul 3 07:38:55 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 3 Jul 2020 15:38:55 +1000 Subject: [PVE-User] Ceph Octopus In-Reply-To: References: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> Message-ID: On 3/07/2020 3:29 pm, Devin A wrote: > Yes they are working on the new version. Will be released sometime soon I > suspect. Ta -- Lindsay From aderumier at odiso.com Fri Jul 3 08:42:49 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Fri, 3 Jul 2020 08:42:49 +0200 (CEST) Subject: [PVE-User] Ceph Octopus In-Reply-To: References: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> Message-ID: <604887075.676493.1593758569218.JavaMail.zimbra@odiso.com> octopus test repo is available here [ http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/ | http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/ ] I'm already using for a customer, the proxmox management && gui is already patched to handle it. 
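For anyone wanting to try it on a test cluster, the repository above translates into a normal APT source; a sketch, to be used on non-production systems only:

  echo "deb http://download.proxmox.com/debian/ceph-octopus buster test" \
    > /etc/apt/sources.list.d/ceph-octopus.list
  apt update && apt full-upgrade

The usual Ceph upgrade order (monitors first, then managers, then OSDs) still applies.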
Alexandre Derumier Ing?nieur syst?me et stockage Manager Infrastructure Fixe : +33 3 59 82 20 10 125 Avenue de la r?publique 59110 La Madeleine [ https://twitter.com/OdisoHosting ] [ https://twitter.com/mindbaz ] [ https://www.linkedin.com/company/odiso ] [ https://www.viadeo.com/fr/company/odiso ] [ https://www.facebook.com/monsiteestlent ] [ https://www.monsiteestlent.com/ | MonSiteEstLent.com ] - Blog d?di? ? la webperformance et la gestion de pics de trafic De: "Lindsay Mathieson" ?: "proxmoxve" Envoy?: Vendredi 3 Juillet 2020 07:38:55 Objet: Re: [PVE-User] Ceph Octopus On 3/07/2020 3:29 pm, Devin A wrote: > Yes they are working on the new version. Will be released sometime soon I > suspect. Ta -- Lindsay _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From lindsay.mathieson at gmail.com Fri Jul 3 12:10:57 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 3 Jul 2020 20:10:57 +1000 Subject: [PVE-User] Ceph Octopus In-Reply-To: <604887075.676493.1593758569218.JavaMail.zimbra@odiso.com> References: <7ce094ae-3fe8-f4a9-025d-57ab5066d3b3@gmail.com> <604887075.676493.1593758569218.JavaMail.zimbra@odiso.com> Message-ID: On 3/07/2020 4:42 pm, Alexandre DERUMIER wrote: > octopus test repo is available here > > [http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/ |http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/ ] > > > I'm already using for a customer, the proxmox management && gui is already patched to handle it. Awww man, don't tempt me... Only just migrated our storage to Ceph and its such a tweakers temptation, already killed the cluster twice :) I should not be updating to a test repo :) -- Lindsay From gilberto.nunes32 at gmail.com Fri Jul 3 14:19:19 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Fri, 3 Jul 2020 09:19:19 -0300 Subject: [PVE-User] LVM-thin from one server to other seems show wrong size In-Reply-To: References: Message-ID: Any body? --- Gilberto Nunes Ferreira Em qui., 2 de jul. de 2020 ?s 15:00, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Hi there > > I have two servers in cluster, but using lvm-thin (aka local-lvm). > I have named the servers as pve01 and pve02. > I have created a vm with 150GB of disk space in pve02. > In pve02 lvs show me that LSize was 150GB and the Data% was show up about > 15%. > Bu then I have migrated the vm from pve02 to pve01 and now, when I list > the lvm with lvs show me that: > lvs | grep 154 > vm-154-disk-0 pve Vwi-aotz-- 140.00g data 100.00 > Why Data% show 100? > I don't get it! > Thanks for any advice. > > --- > Gilberto Nunes Ferreira > > From mark at tuxis.nl Fri Jul 3 14:41:37 2020 From: mark at tuxis.nl (Mark Schouten) Date: Fri, 3 Jul 2020 14:41:37 +0200 Subject: [PVE-User] LVM-thin from one server to other seems show wrong size In-Reply-To: References: Message-ID: <20200703124137.lp7ligmvaezqxk5c@shell.tuxis.net> On Fri, Jul 03, 2020 at 09:19:19AM -0300, Gilberto Nunes wrote: > Any body? After migrating, the disk is no longer thin-provisioned. Enable discard, mount linux vm's with the discard option or run fstrim. On windows, check the optimize button in defrag. -- Mark Schouten | Tuxis B.V. 
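Spelled out for the disk from this thread, and assuming it is attached as scsi0 (adjust to the actual bus/device in the VM config), that would be something like:

  qm set 154 --scsi0 local-lvm:vm-154-disk-0,discard=on

then, inside a Linux guest, run 'fstrim -av' (or mount the filesystems with the discard option). Afterwards 'lvs' on the host should show Data% dropping back towards the space that is actually used.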
KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From mark at tuxis.nl Fri Jul 3 14:42:24 2020 From: mark at tuxis.nl (Mark Schouten) Date: Fri, 3 Jul 2020 14:42:24 +0200 Subject: [PVE-User] High I/O waits, not sure if it's a ceph issue. In-Reply-To: <1349332156.660711.1593698252401.JavaMail.zimbra@odiso.com> References: <20200630130912.qia6rghud5okmnsp@shell.tuxis.net> <21a85fbb-a67c-ad98-f8d4-c02be09ef3e4@gmail.com> <20200702131520.uf7huisga3sq6rzj@shell.tuxis.net> <1349332156.660711.1593698252401.JavaMail.zimbra@odiso.com> Message-ID: <20200703124224.uwgad3rxhzfniqgj@shell.tuxis.net> On Thu, Jul 02, 2020 at 03:57:32PM +0200, Alexandre DERUMIER wrote: > Hi, > > you should give it a try to ceph octopus. librbd have greatly improved for write, and I can recommand to enable writeback now by default Yes, that wil probably even work better. But what I'm trying to determine here, is if krbd is a better choice than librbd :) -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From gilberto.nunes32 at gmail.com Fri Jul 3 14:58:20 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Fri, 3 Jul 2020 09:58:20 -0300 Subject: [PVE-User] LVM-thin from one server to other seems show wrong size In-Reply-To: <20200703124137.lp7ligmvaezqxk5c@shell.tuxis.net> References: <20200703124137.lp7ligmvaezqxk5c@shell.tuxis.net> Message-ID: Hi! Thanks a lot... I will try it and then report later... --- Gilberto Nunes Ferreira Em sex., 3 de jul. de 2020 ?s 09:43, Mark Schouten escreveu: > On Fri, Jul 03, 2020 at 09:19:19AM -0300, Gilberto Nunes wrote: > > Any body? > > After migrating, the disk is no longer thin-provisioned. Enable discard, > mount linux vm's with the discard option or run fstrim. On windows, > check the optimize button in defrag. > > -- > Mark Schouten | Tuxis B.V. > KvK: 74698818 | http://www.tuxis.nl/ > T: +31 318 200208 | info at tuxis.nl > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From martin at proxmox.com Mon Jul 6 15:47:51 2020 From: martin at proxmox.com (Martin Maurer) Date: Mon, 6 Jul 2020 15:47:51 +0200 Subject: [PVE-User] mailing list server maintenance Message-ID: <6fee5c61-62c3-1ef5-99c6-4591306b577b@proxmox.com> Hi all, We will move all mailing lists to an new host. Maintenance will take place on 7th of July 2020 (tomorrow). We try to minimize downtime, no emails will be lost. Thanks for your understanding. -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com From martin at proxmox.com Tue Jul 7 12:40:43 2020 From: martin at proxmox.com (Martin Maurer) Date: Tue, 7 Jul 2020 12:40:43 +0200 Subject: [PVE-User] mailing list server maintenance In-Reply-To: <6fee5c61-62c3-1ef5-99c6-4591306b577b@proxmox.com> References: <6fee5c61-62c3-1ef5-99c6-4591306b577b@proxmox.com> Message-ID: <3c2e192e-8535-ddb5-65fd-e19aa264e0f9@proxmox.com> Hi all, We just finished the migration, you can find the new mailman server at: https://lists.proxmox.com Please update your mail-clients and git-configurations to use the new address: pve-user at lists.proxmox.com On 7/6/20 3:47 PM, Martin Maurer wrote: > Hi all, > > We will move all mailing lists to an new host. > > Maintenance will take place on 7th of July 2020 (tomorrow). > We try to minimize downtime, no emails will be lost. > > Thanks for your understanding. 
> -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com From piccardi at truelite.it Tue Jul 7 17:48:13 2020 From: piccardi at truelite.it (Simone Piccardi) Date: Tue, 7 Jul 2020 17:48:13 +0200 Subject: qmrestore stopped working for filename restrictions Message-ID: Hi, I have a problem with qmrestore not working on some dump filenames, it worked some time ago, but with the last version: # pveversion pve-manager/6.2-6/ee1d7754 (running kernel: 5.4.44-1-pve) I got: # qmrestore vzdump-qemu-fuss-server-10.0-latest.vma.lzo 110 -unique -storage local-lvm ERROR: couldn't determine archive info from '/var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo' I found that the problem is that the name do not follow an expected naming scheme (renaming the file as vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo works fine). It work also if I make /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo a simbolic link to vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo, and I find this quite strange). Anyway that error message is at least misleading: if the problem is the filename not having a right name, just tell this: from the message wording at first I thinked the file was corrupted. But then I do not undertstand why this restriction suddenly come up, and what's the problem of restoring a VM from a file having a more descriptive name. That one I'm restoring is a template image I'm distributing, I'd like to avoid names like vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo. I tried also to overcame the restriction using the standard input as the source but I got a different error: # cat /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo| qmrestore - 110 -unique -storage local-lvm restore vma archive: vma extract -v -r /var/tmp/vzdumptmp31864.fifo - /var/tmp/vzdumptmp31864 command 'set -o pipefail && vma extract -v -r /var/tmp/vzdumptmp31864.fifo - /var/tmp/vzdumptmp31864' failed: got timeout Is there a way to restore a file dump avoiding to rename it? Simone -- Simone Piccardi Truelite Srl piccardi at truelite.it (email/jabber) Via Monferrato, 6 Tel. +39-347-1032433 50142 Firenze http://www.truelite.it Tel. +39-055-7879597 From piccardi at truelite.it Wed Jul 8 10:53:28 2020 From: piccardi at truelite.it (Simone Piccardi) Date: Wed, 8 Jul 2020 10:53:28 +0200 Subject: qmrestore stopped working for filename restrictions Message-ID: Hi, I have a problem with qmrestore not working on some dump filenames, it worked some time ago, but with the last version: # pveversion pve-manager/6.2-6/ee1d7754 (running kernel: 5.4.44-1-pve) I got: # qmrestore vzdump-qemu-fuss-server-10.0-latest.vma.lzo 110 -unique -storage local-lvm ERROR: couldn't determine archive info from '/var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo' I found that the problem is that the name do not follow an expected naming scheme (renaming the file as vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo works fine). It work also if I make /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo a simbolic link to vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo, and I find this quite strange). Anyway that error message is at least misleading: if the problem is the filename not having a right name, just tell this: from the message wording at first I thinked the file was corrupted. But then I do not undertstand why this restriction suddenly come up, and what's the problem of restoring a VM from a file having a more descriptive name. 
That one I'm restoring is a template image I'm distributing, I'd like to avoid names like vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo. I tried also to overcame the restriction using the standard input as the source but I got a different error: # cat /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo| qmrestore - 110 -unique -storage local-lvm restore vma archive: vma extract -v -r /var/tmp/vzdumptmp31864.fifo - /var/tmp/vzdumptmp31864 command 'set -o pipefail && vma extract -v -r /var/tmp/vzdumptmp31864.fifo - /var/tmp/vzdumptmp31864' failed: got timeout Is there a way to restore a file dump avoiding to rename it? Simone -- Simone Piccardi Truelite Srl piccardi at truelite.it (email/jabber) Via Monferrato, 6 Tel. +39-347-1032433 50142 Firenze http://www.truelite.it Tel. +39-055-7879597 From f.gruenbichler at proxmox.com Wed Jul 8 11:47:46 2020 From: f.gruenbichler at proxmox.com (=?UTF-8?Q?Fabian_Gr=C3=BCnbichler?=) Date: Wed, 8 Jul 2020 11:47:46 +0200 (CEST) Subject: [PVE-User] qmrestore stopped working for filename restrictions In-Reply-To: References: Message-ID: <190302424.156.1594201666202@webmail.proxmox.com> > Simone Piccardi via pve-user hat am 08.07.2020 10:53 geschrieben: > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > Hi, > > I have a problem with qmrestore not working on some dump filenames, it > worked some time ago, but with the last version: > > # pveversion > pve-manager/6.2-6/ee1d7754 (running kernel: 5.4.44-1-pve) > > > I got: > > # qmrestore vzdump-qemu-fuss-server-10.0-latest.vma.lzo 110 -unique > -storage local-lvm > ERROR: couldn't determine archive info from > '/var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo' > > I found that the problem is that the name do not follow an expected > naming scheme (renaming the file as > vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo works fine). It work also if > I make /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo a > simbolic link to vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo, and I find > this quite strange). > > Anyway that error message is at least misleading: if the problem is the > filename not having a right name, just tell this: from the message > wording at first I thinked the file was corrupted. how is it misleading? we tried to determine the archive info (backup time, format, compression, backed-up guest ID and type) from the file name, and were not able to.. > > But then I do not undertstand why this restriction suddenly come up, and > what's the problem of restoring a VM from a file having a more > descriptive name. the code was refactored to allow re-use of the same logic in more places, was initially to strict, got relaxed, but not as far as your use case ;) > > That one I'm restoring is a template image I'm distributing, I'd like to > avoid names like vzdump-qemu-000-0000_00_00-00_00_00.vma.lzo. > > I tried also to overcame the restriction using the standard input as the > source but I got a different error: > > # cat /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo| > qmrestore - 110 -unique -storage local-lvm > restore vma archive: vma extract -v -r /var/tmp/vzdumptmp31864.fifo - > /var/tmp/vzdumptmp31864 > command 'set -o pipefail && vma extract -v -r > /var/tmp/vzdumptmp31864.fifo - /var/tmp/vzdumptmp31864' failed: got timeout in that case you need to pipe in the extracted vma, not the compressed one. 
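In other words, decompress on the fly and feed the plain vma stream to qmrestore; a sketch for the lzo archive from this thread:

  lzop -dc /var/lib/vz/dump/vzdump-qemu-fuss-server-10.0-latest.vma.lzo \
    | qmrestore - 110 -unique -storage local-lvm

Renaming or symlinking the file to something matching the vzdump-qemu-<vmid>-<timestamp>.vma.lzo scheme, as described earlier in the thread, keeps working as well.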
> > Is there a way to restore a file dump avoiding to rename it? yes and no. currently you need at least the prefix 'vzdump-qemu-\d+-', e.g. 'vzdump-qemu-0' if you want to use a VMID that is not usable in general and thus not colliding. I see no reason why this could not be relaxed further to allow the full 'vzdump-qemu-*.FILEXTENSION' though, so I'll send a patch shortly to do just that. From piccardi at truelite.it Wed Jul 8 12:45:23 2020 From: piccardi at truelite.it (Simone Piccardi) Date: Wed, 8 Jul 2020 12:45:23 +0200 Subject: [PVE-User] qmrestore stopped working for filename restrictions In-Reply-To: <190302424.156.1594201666202@webmail.proxmox.com> References: <190302424.156.1594201666202@webmail.proxmox.com> Message-ID: Il 08/07/20 11:47, Fabian Gr?nbichler ha scritto: > how is it misleading? we tried to determine the archive info (backup time, format, compression, backed-up guest ID and type) from the file name, and were not able to.. > It's misleading because for a well established Unix practice I expect that informations on the file content are inside the file itself and not in the name. So I do expect that if a command fails because of the file name (like gunzip does) it tell me explicetely. There is no documentation on qmrestore man page that there is a naming convention tha you must respect on the archive name. Nor that you need to use an uncompressed file when using standard input. Simone -- Simone Piccardi Truelite Srl piccardi at truelite.it (email/jabber) Via Monferrato, 6 Tel. +39-347-1032433 50142 Firenze http://www.truelite.it Tel. +39-055-7879597 From mark at tuxis.nl Wed Jul 8 14:32:42 2020 From: mark at tuxis.nl (Mark Schouten) Date: Wed, 8 Jul 2020 14:32:42 +0200 Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: References: Message-ID: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> Hi all, We're still having this issue. I'm trying to find the place where the timestamp used for verification is coming from, but I can't figure it out. Obviously the issue originated from the incorrect time while installing, but that is fixed now: root at node01:~# timedatectl status Local time: Wed 2020-07-08 14:31:42 CEST Universal time: Wed 2020-07-08 12:31:42 UTC RTC time: Wed 2020-07-08 12:31:42 Time zone: Europe/Amsterdam (CEST, +0200) System clock synchronized: yes NTP service: active RTC in local TZ: no root at node02:~# timedatectl status Local time: Wed 2020-07-08 14:31:43 CEST Universal time: Wed 2020-07-08 12:31:43 UTC RTC time: Wed 2020-07-08 12:31:43 Time zone: Europe/Amsterdam (CEST, +0200) System clock synchronized: yes NTP service: active RTC in local TZ: no root at node03:~# timedatectl status Local time: Wed 2020-07-08 14:31:43 CEST Universal time: Wed 2020-07-08 12:31:43 UTC RTC time: Wed 2020-07-08 12:31:43 Time zone: Europe/Amsterdam (CEST, +0200) System clock synchronized: yes NTP service: active RTC in local TZ: no Please advise.. On Wed, Apr 01, 2020 at 02:41:24PM +0200, Richard Hopman wrote: > > Looking on some input on this: > 3 node cluster running pve 6.1-7, users logged into the webgui get logged out after 5 minutes. > systemd-timesyncd is running on all 3 machines, time is in sync. > /var/log/pveproxy/access.log is just reporting a 401 at the time of auto logoff > /var/log/syslog not reporting any issues > > Time was probably off on one node when i installed the cluster as 1 node > had a certificate becoming valid in the future. Based on this i replaced all node certificates > > after creating a new /etc/pve/pve-root-ca. 
> > Any help in this matter is greatly appreciated. > -- > BR, > > Richard > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From elacunza at binovo.es Wed Jul 8 14:48:23 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 8 Jul 2020 14:48:23 +0200 Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> References: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> Message-ID: <16e565a6-48ca-cb9d-9e37-109ce7cd17dd@binovo.es> Hi, We ehad this issue and a cache clear and/or letting some hours go fixed it. El 8/7/20 a las 14:32, Mark Schouten escribi?: > Hi all, > > We're still having this issue. I'm trying to find the place where the > timestamp used for verification is coming from, but I can't figure it > out. > > Obviously the issue originated from the incorrect time while installing, > but that is fixed now: > > root at node01:~# timedatectl status > Local time: Wed 2020-07-08 14:31:42 CEST > Universal time: Wed 2020-07-08 12:31:42 UTC > RTC time: Wed 2020-07-08 12:31:42 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > root at node02:~# timedatectl status > Local time: Wed 2020-07-08 14:31:43 CEST > Universal time: Wed 2020-07-08 12:31:43 UTC > RTC time: Wed 2020-07-08 12:31:43 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > root at node03:~# timedatectl status > Local time: Wed 2020-07-08 14:31:43 CEST > Universal time: Wed 2020-07-08 12:31:43 UTC > RTC time: Wed 2020-07-08 12:31:43 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > > Please advise.. > > > > > On Wed, Apr 01, 2020 at 02:41:24PM +0200, Richard Hopman wrote: >> Looking on some input on this: >> 3 node cluster running pve 6.1-7, users logged into the webgui get logged out after 5 minutes. >> systemd-timesyncd is running on all 3 machines, time is in sync. >> /var/log/pveproxy/access.log is just reporting a 401 at the time of auto logoff >> /var/log/syslog not reporting any issues >> >> Time was probably off on one node when i installed the cluster as 1 node >> had a certificate becoming valid in the future. Based on this i replaced all node certificates >> >> after creating a new /etc/pve/pve-root-ca. >> >> Any help in this matter is greatly appreciated. >> -- >> BR, >> >> Richard >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From f.gruenbichler at proxmox.com Wed Jul 8 15:04:58 2020 From: f.gruenbichler at proxmox.com (=?UTF-8?Q?Fabian_Gr=C3=BCnbichler?=) Date: Wed, 8 Jul 2020 15:04:58 +0200 (CEST) Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> References: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> Message-ID: <421671497.187.1594213498496@webmail.proxmox.com> > Mark Schouten hat am 08.07.2020 14:32 geschrieben: > > > Hi all, > > We're still having this issue. 
I'm trying to find the place where the > timestamp used for verification is coming from, but I can't figure it > out. > > Obviously the issue originated from the incorrect time while installing, > but that is fixed now: > > root at node01:~# timedatectl status > Local time: Wed 2020-07-08 14:31:42 CEST > Universal time: Wed 2020-07-08 12:31:42 UTC > RTC time: Wed 2020-07-08 12:31:42 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > root at node02:~# timedatectl status > Local time: Wed 2020-07-08 14:31:43 CEST > Universal time: Wed 2020-07-08 12:31:43 UTC > RTC time: Wed 2020-07-08 12:31:43 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > root at node03:~# timedatectl status > Local time: Wed 2020-07-08 14:31:43 CEST > Universal time: Wed 2020-07-08 12:31:43 UTC > RTC time: Wed 2020-07-08 12:31:43 > Time zone: Europe/Amsterdam (CEST, +0200) > System clock synchronized: yes > NTP service: active > RTC in local TZ: no > > Please advise.. either wait, or touch the authkey files, or remove them. fixed in git already: https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=9de25de807b6843c971f1863e811cf76ee4c9b23 ;) From mark at tuxis.nl Wed Jul 8 15:48:07 2020 From: mark at tuxis.nl (Mark Schouten) Date: Wed, 8 Jul 2020 15:48:07 +0200 Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: <421671497.187.1594213498496@webmail.proxmox.com> References: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> <421671497.187.1594213498496@webmail.proxmox.com> Message-ID: <20200708134807.nspd27qvjpwbp2a3@shell.tuxis.net> On Wed, Jul 08, 2020 at 03:04:58PM +0200, Fabian Gr?nbichler wrote: > either wait, or touch the authkey files, or remove them. You mean: 61 1 -rw------- 1 root www-data 1679 Dec 10 2020 /etc/pve/priv/authkey.key 28 1 -rw-r----- 1 root www-data 451 Dec 10 2020 /etc/pve/authkey.pub ? > fixed in git already: https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=9de25de807b6843c971f1863e811cf76ee4c9b23 > > ;) That's just cheating :) -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From f.gruenbichler at proxmox.com Thu Jul 9 09:42:01 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Thu, 09 Jul 2020 09:42:01 +0200 Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: <20200708134807.nspd27qvjpwbp2a3@shell.tuxis.net> References: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> <421671497.187.1594213498496@webmail.proxmox.com> <20200708134807.nspd27qvjpwbp2a3@shell.tuxis.net> Message-ID: <1594280437.q87xlrcc4w.astroid@nora.none> On July 8, 2020 3:48 pm, Mark Schouten wrote: > On Wed, Jul 08, 2020 at 03:04:58PM +0200, Fabian Gr?nbichler wrote: >> either wait, or touch the authkey files, or remove them. > > You mean: > 61 1 -rw------- 1 root www-data 1679 Dec 10 2020 /etc/pve/priv/authkey.key > 28 1 -rw-r----- 1 root www-data 451 Dec 10 2020 /etc/pve/authkey.pub > > ? 
yes :) >> fixed in git already: https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=9de25de807b6843c971f1863e811cf76ee4c9b23 >> >> ;) > > That's just cheating :) Probably it fixed itself in the meantime anyway - unless your clock was really really off into the future ;) From mark at tuxis.nl Thu Jul 9 11:25:41 2020 From: mark at tuxis.nl (Mark Schouten) Date: Thu, 9 Jul 2020 11:25:41 +0200 Subject: [PVE-User] pve webgui auto logoff (5m) In-Reply-To: <1594280437.q87xlrcc4w.astroid@nora.none> References: <20200708123242.u6mzoa4yyb5qnwem@shell.tuxis.net> <421671497.187.1594213498496@webmail.proxmox.com> <20200708134807.nspd27qvjpwbp2a3@shell.tuxis.net> <1594280437.q87xlrcc4w.astroid@nora.none> Message-ID: <20200709092541.6tm465zcgi5dsrun@shell.tuxis.net> On Thu, Jul 09, 2020 at 09:42:01AM +0200, Fabian Gr?nbichler wrote: > On July 8, 2020 3:48 pm, Mark Schouten wrote: > Probably it fixed itself in the meantime anyway - unless your clock was > really really off into the future ;) Dec 10, 2020. So we're not there yet. But I touched the files and now the customer is very happy. Thanks! -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From martin at proxmox.com Fri Jul 10 12:56:46 2020 From: martin at proxmox.com (Martin Maurer) Date: Fri, 10 Jul 2020 12:56:46 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) Message-ID: We are proud to announce the first beta release of our new Proxmox Backup Server. It's an enterprise-class client-server backup software that backups virtual machines, containers, and physical hosts. It is specially optimized for the Proxmox Virtual Environment platform and allows you to backup and replicate your data securely. It provides easy management with a command line and web-based user interface, and is licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). Proxmox Backup Server supports incremental backups, deduplication, compression and authenticated encryption. Using Rust https://www.rust-lang.org/ as implementation language guarantees high performance, low resource usage, and a safe, high quality code base. It features strong encryption done on the client side. Thus, it?s possible to backup data to not fully trusted targets. Main Features Support for Proxmox VE: The Proxmox Virtual Environment is fully supported and you can easily backup virtual machines (supporting QEMU dirty bitmaps - https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. Performance: The whole software stack is written in Rust https://www.rust-lang.org/, to provide high speed and memory efficiency. Deduplication: Periodic backups produce large amounts of duplicate data. The deduplication layer avoids redundancy and minimizes the used storage space. Incremental backups: Changes between backups are typically low. Reading and sending only the delta reduces storage and network impact of backups. Data Integrity: The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum algorithm assures the accuracy and consistency of your backups. Remote Sync: It is possible to efficiently synchronize data to remote sites. Only deltas containing new data are transferred. Compression: The ultra fast Zstandard compression is able to compress several gigabytes of data per second. Encryption: Backups can be encrypted on the client-side using AES-256 in Galois/Counter Mode (GCM https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated encryption mode provides very high performance on modern hardware. 
Web interface: Manage Proxmox backups with the integrated web-based user interface. Open Source: No secrets. Proxmox Backup Server is free and open-source software. The source code is licensed under AGPL, v3. Support: Enterprise support will be available from Proxmox. And of course - Backups can be restored! Release notes https://pbs.proxmox.com/wiki/index.php/Roadmap Download https://www.proxmox.com/downloads Alternate ISO download: http://download.proxmox.com/iso Documentation https://pbs.proxmox.com Community Forum https://forum.proxmox.com Source Code https://git.proxmox.com Bugtracker https://bugzilla.proxmox.com FAQ Q: How does this integrate into Proxmox VE? A: Just add your Proxmox Backup Server storage as new storage backup target to your Proxmox VE. Make sure that you have at least pve-manager 6.2-9 installed. Q: What will happen with the existing Proxmox VE backup (vzdump)? A: You can still use vzdump. The new backup is an additional but very powerful way to backup and restore your VMs and container. Q: Can I already backup my other Debian servers (file backup agent)? A: Yes, just install the Proxmox Backup Client (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). Q: Are there already backup agents for other distributions? A: Not packaged yet, but using a statically linked binary should work in most cases on modern Linux OS (work in progress). Q: Is there any recommended server hardware for the Proxmox Backup Server? A: Use enterprise class server hardware with enough disks for the (big) ZFS pool holding your backup data. The Backup Server should be in the same datacenter as your Proxmox VE hosts. Q: Where can I get more information about coming feature updates? A: Follow the announcement forum, pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and subscribe to our newsletter https://www.proxmox.com/news. Please help us reaching the final release date by testing this beta and by providing feedback via https://forum.proxmox.com -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com From elacunza at binovo.es Fri Jul 10 13:37:29 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 10 Jul 2020 13:37:29 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: Hi Martin, Congratulations on this all-new release to the team! In a fast overview, it looks like a well-thought solution with sound technical choices like Rust usage. I'm eager to give it a try, as it could help us converge our current VM and file-level backups. Given we usually perform PVE backups to a NFS server (in a PVE cluster node or standalone NAS), do you think it would make sense to setup PBS in a VM, with storage on a NFS server? Thanks a lot for your continued open source developments!! Cheers Eneko El 10/7/20 a las 12:56, Martin Maurer escribi?: > We are proud to announce the first beta release of our new Proxmox > Backup Server. > > It's an enterprise-class client-server backup software that backups > virtual machines, containers, and physical hosts. It is specially > optimized for the Proxmox Virtual Environment platform and allows you > to backup and replicate your data securely. It provides easy > management with a command line and web-based user interface, and is > licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). > > Proxmox Backup Server supports incremental backups, deduplication, > compression and authenticated encryption. 
Using Rust > https://www.rust-lang.org/ as implementation language guarantees high > performance, low resource usage, and a safe, high quality code base. > It features strong encryption done on the client side. Thus, it?s > possible to backup data to not fully trusted targets. > > Main Features > > Support for Proxmox VE: > The Proxmox Virtual Environment is fully supported and you can easily > backup virtual machines (supporting QEMU dirty bitmaps - > https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. > > Performance: > The whole software stack is written in Rust > https://www.rust-lang.org/, to provide high speed and memory efficiency. > > Deduplication: > Periodic backups produce large amounts of duplicate data. The > deduplication layer avoids redundancy and minimizes the used storage > space. > > Incremental backups: > Changes between backups are typically low. Reading and sending only > the delta reduces storage and network impact of backups. > > Data Integrity: > The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum > algorithm assures the accuracy and consistency of your backups. > > Remote Sync: > It is possible to efficiently synchronize data to remote sites. Only > deltas containing new data are transferred. > > Compression: > The ultra fast Zstandard compression is able to compress several > gigabytes of data per second. > > Encryption: > Backups can be encrypted on the client-side using AES-256 in > Galois/Counter Mode (GCM > https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated > encryption mode provides very high performance on modern hardware. > > Web interface: > Manage Proxmox backups with the integrated web-based user interface. > > Open Source: > No secrets. Proxmox Backup Server is free and open-source software. > The source code is licensed under AGPL, v3. > > Support: > Enterprise support will be available from Proxmox. > > And of course - Backups can be restored! > > Release notes > https://pbs.proxmox.com/wiki/index.php/Roadmap > > Download > https://www.proxmox.com/downloads > Alternate ISO download: > http://download.proxmox.com/iso > > Documentation > https://pbs.proxmox.com > > Community Forum > https://forum.proxmox.com > > Source Code > https://git.proxmox.com > > Bugtracker > https://bugzilla.proxmox.com > > FAQ > Q: How does this integrate into Proxmox VE? > A: Just add your Proxmox Backup Server storage as new storage backup > target to your Proxmox VE. Make sure that you have at least > pve-manager 6.2-9 installed. > > Q: What will happen with the existing Proxmox VE backup (vzdump)? > A: You can still use vzdump. The new backup is an additional but very > powerful way to backup and restore your VMs and container. > > Q: Can I already backup my other Debian servers (file backup agent)? > A: Yes, just install the Proxmox Backup Client > (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). > > Q: Are there already backup agents for other distributions? > A: Not packaged yet, but using a statically linked binary should work > in most cases on modern Linux OS (work in progress). > > Q: Is there any recommended server hardware for the Proxmox Backup > Server? > A: Use enterprise class server hardware with enough disks for the > (big) ZFS pool holding your backup data. The Backup Server should be > in the same datacenter as your Proxmox VE hosts. > > Q: Where can I get more information about coming feature updates? 
> A: Follow the announcement forum, pbs-devel mailing list > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and > subscribe to our newsletter https://www.proxmox.com/news. > > Please help us reaching the final release date by testing this beta > and by providing feedback via https://forum.proxmox.com > -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From dietmar at proxmox.com Fri Jul 10 13:45:13 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 13:45:13 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <1723924288.458.1594381514035@webmail.proxmox.com> > Given we usually perform PVE backups to a NFS server (in a PVE cluster > node or standalone NAS), do you think it would make sense to setup PBS > in a VM, with storage on a NFS server? I guess you can do that, but you may not get maximal performance this way. Especially if you put the data on NFS, you can end up sending everything twice over the network ... From devzero at web.de Fri Jul 10 13:42:05 2020 From: devzero at web.de (Roland) Date: Fri, 10 Jul 2020 13:42:05 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> great to hear! :) one technical/performance question - will delta backup be i/o efficient ( like VMWare cbt ) ? regards roland Am 10.07.20 um 12:56 schrieb Martin Maurer: > We are proud to announce the first beta release of our new Proxmox > Backup Server. > > It's an enterprise-class client-server backup software that backups > virtual machines, containers, and physical hosts. It is specially > optimized for the Proxmox Virtual Environment platform and allows you > to backup and replicate your data securely. It provides easy > management with a command line and web-based user interface, and is > licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). > > Proxmox Backup Server supports incremental backups, deduplication, > compression and authenticated encryption. Using Rust > https://www.rust-lang.org/ as implementation language guarantees high > performance, low resource usage, and a safe, high quality code base. > It features strong encryption done on the client side. Thus, it?s > possible to backup data to not fully trusted targets. > > Main Features > > Support for Proxmox VE: > The Proxmox Virtual Environment is fully supported and you can easily > backup virtual machines (supporting QEMU dirty bitmaps - > https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. > > Performance: > The whole software stack is written in Rust > https://www.rust-lang.org/, to provide high speed and memory efficiency. > > Deduplication: > Periodic backups produce large amounts of duplicate data. The > deduplication layer avoids redundancy and minimizes the used storage > space. > > Incremental backups: > Changes between backups are typically low. Reading and sending only > the delta reduces storage and network impact of backups. > > Data Integrity: > The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum > algorithm assures the accuracy and consistency of your backups. > > Remote Sync: > It is possible to efficiently synchronize data to remote sites. Only > deltas containing new data are transferred. 
> > Compression: > The ultra fast Zstandard compression is able to compress several > gigabytes of data per second. > > Encryption: > Backups can be encrypted on the client-side using AES-256 in > Galois/Counter Mode (GCM > https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated > encryption mode provides very high performance on modern hardware. > > Web interface: > Manage Proxmox backups with the integrated web-based user interface. > > Open Source: > No secrets. Proxmox Backup Server is free and open-source software. > The source code is licensed under AGPL, v3. > > Support: > Enterprise support will be available from Proxmox. > > And of course - Backups can be restored! > > Release notes > https://pbs.proxmox.com/wiki/index.php/Roadmap > > Download > https://www.proxmox.com/downloads > Alternate ISO download: > http://download.proxmox.com/iso > > Documentation > https://pbs.proxmox.com > > Community Forum > https://forum.proxmox.com > > Source Code > https://git.proxmox.com > > Bugtracker > https://bugzilla.proxmox.com > > FAQ > Q: How does this integrate into Proxmox VE? > A: Just add your Proxmox Backup Server storage as new storage backup > target to your Proxmox VE. Make sure that you have at least > pve-manager 6.2-9 installed. > > Q: What will happen with the existing Proxmox VE backup (vzdump)? > A: You can still use vzdump. The new backup is an additional but very > powerful way to backup and restore your VMs and container. > > Q: Can I already backup my other Debian servers (file backup agent)? > A: Yes, just install the Proxmox Backup Client > (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). > > Q: Are there already backup agents for other distributions? > A: Not packaged yet, but using a statically linked binary should work > in most cases on modern Linux OS (work in progress). > > Q: Is there any recommended server hardware for the Proxmox Backup > Server? > A: Use enterprise class server hardware with enough disks for the > (big) ZFS pool holding your backup data. The Backup Server should be > in the same datacenter as your Proxmox VE hosts. > > Q: Where can I get more information about coming feature updates? > A: Follow the announcement forum, pbs-devel mailing list > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and > subscribe to our newsletter https://www.proxmox.com/news. > > Please help us reaching the final release date by testing this beta > and by providing feedback via https://forum.proxmox.com > From elacunza at binovo.es Fri Jul 10 13:56:03 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 10 Jul 2020 13:56:03 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1723924288.458.1594381514035@webmail.proxmox.com> References: <1723924288.458.1594381514035@webmail.proxmox.com> Message-ID: Hi Dietmar, El 10/7/20 a las 13:45, Dietmar Maurer escribi?: > >> Given we usually perform PVE backups to a NFS server (in a PVE cluster >> node or standalone NAS), do you think it would make sense to setup PBS >> in a VM, with storage on a NFS server? > I guess you can do that, but you may not get maximal performance this way. > > Especially if you put the data on NFS, you can end up sending > everything twice over the network ... > That's a good point to consider, although reusing existing infrastructure and/or not needing a fourth server could outweight it for small clusters, specially considering the bandwith savings due to incremental VM backups versus current PVE full backups. 
Thanks a lot Eneko -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From lindsay.mathieson at gmail.com Fri Jul 10 14:03:31 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 10 Jul 2020 22:03:31 +1000 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> On 10/07/2020 8:56 pm, Martin Maurer wrote: > We are proud to announce the first beta release of our new Proxmox > Backup Server. Oh excellent, the backup system really needed some love and this looks interesting. Since I have no life I'll be testing this on a VM tonight :) Before I get into it - does the backup server support copying the backups to a external device such as a USB Drive so I can rotate backups offsite? -- Lindsay From dietmar at proxmox.com Fri Jul 10 14:09:40 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 14:09:40 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> Message-ID: <1935605059.460.1594382980972@webmail.proxmox.com> > one technical/performance question - will delta backup be i/o efficient > ( like VMWare cbt ) ? yes From dietmar at proxmox.com Fri Jul 10 14:13:23 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 14:13:23 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> Message-ID: <1768587204.461.1594383204281@webmail.proxmox.com> > Before I get into it - does the backup server support copying the > backups to a external device such as a USB Drive so I can rotate backups > offsite? I guess you can simply use rsync to copy the datastore to the usb stick. From devzero at web.de Fri Jul 10 14:24:17 2020 From: devzero at web.de (Roland) Date: Fri, 10 Jul 2020 14:24:17 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1935605059.460.1594382980972@webmail.proxmox.com> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> Message-ID: <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> fantastic! :) but - how does it work ? Am 10.07.20 um 14:09 schrieb Dietmar Maurer: >> one technical/performance question - will delta backup be i/o efficient >> ( like VMWare cbt ) ? > yes > From iztok.gregori at elettra.eu Fri Jul 10 14:45:28 2020 From: iztok.gregori at elettra.eu (Iztok Gregori) Date: Fri, 10 Jul 2020 14:45:28 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> On 10/07/20 12:56, Martin Maurer wrote: > We are proud to announce the first beta release of our new Proxmox > Backup Server. > Great to hear! Are you planning to support also CEPH (or other distributed file systems) as destination storage backend? 
Iztok Gregori From dietmar at proxmox.com Fri Jul 10 15:41:26 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 15:41:26 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> References: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> Message-ID: <521875662.472.1594388487491@webmail.proxmox.com> > Are you planning to support also CEPH (or other distributed file > systems) as destination storage backend? It is already possible to put the datastore a a mounted cephfs, or anything you can mount on the host. But this means that you copy data over the network multiple times, so this is not the best option performance wise... From t.lamprecht at proxmox.com Fri Jul 10 15:43:36 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 10 Jul 2020 15:43:36 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> Message-ID: <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> On 10.07.20 14:24, Roland wrote: > fantastic! :) > > but - how does it work ? It uses a content addressable storage to save the data chunks. Effectively, the same data chunk doesn't uses additional storage if saved more than once. From dietmar at proxmox.com Fri Jul 10 15:44:12 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 15:44:12 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> Message-ID: <386512653.473.1594388653259@webmail.proxmox.com> > fantastic! :) > > but - how does it work ? see: https://pbs.proxmox.com/wiki/index.php/Main_Page From t.lamprecht at proxmox.com Fri Jul 10 15:50:55 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 10 Jul 2020 15:50:55 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <1723924288.458.1594381514035@webmail.proxmox.com> Message-ID: Hi Eneko, On 10.07.20 13:56, Eneko Lacunza wrote: > El 10/7/20 a las 13:45, Dietmar Maurer escribi?: >>> Given we usually perform PVE backups to a NFS server (in a PVE cluster >>> node or standalone NAS), do you think it would make sense to setup PBS >>> in a VM, with storage on a NFS server? >> I guess you can do that, but you may not get maximal performance this way. >> >> Especially if you put the data on NFS, you can end up sending >> everything twice over the network ... >> > That's a good point to consider, although reusing existing infrastructure and/or not needing a fourth server could outweight it for small clusters, specially considering the bandwith savings due to incremental VM backups versus current PVE full backups. > Note that it also supports remote sync, and that backups can be encrypted by the client, this opens a few possibilities. One could be having local "hyper-converged" backup servers in each small cluster, and one central (or depending on safety concerns, two) big server to have a off-site copy of all the data in the case a cluster one fails. As data can be encrypted by the client the backup server doesn't have to be fully trusted. 
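In practice the client-side part of that is small - roughly like this (key path and repository are placeholders, check the client docs for the exact options):

  proxmox-backup-client key create /root/pbs-encryption.key
  proxmox-backup-client backup etc.pxar:/etc \
      --repository root at pam@pbs.example.com:store1 \
      --keyfile /root/pbs-encryption.key

The key never leaves the client, so the remote end only stores encrypted chunks - that is what makes the "not fully trusted" target workable. Losing the key means losing the backups, so it needs its own safe copy.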
And as remote sync schedules are done efficiently (only the delta) one could have a remote over the WAN. This won't be the primary recommended setup, as a big (enough) local server as primary backup is always faster and better than a hyper-converged one, but should work for situations where one is limited by local available HW. cheers, Thomas From piccardi at truelite.it Fri Jul 10 16:01:18 2020 From: piccardi at truelite.it (Simone Piccardi) Date: Fri, 10 Jul 2020 16:01:18 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: Il 10/07/20 12:56, Martin Maurer ha scritto: > We are proud to announce the first beta release of our new Proxmox > Backup Server. > Thanks for the effort, that's very interesting. Two question: 1. Having two indipendend Proxmox server can install it on both, to do a cross backup? 2. There is a stress on ZFS support on the kernel and in the documentation there is a chapter on managing it, it's not clear to me if this is needed just for better performance or I can use it also just using an installation having just LVM Regards Simone -- Simone Piccardi Truelite Srl piccardi at truelite.it (email/jabber) Via Monferrato, 6 Tel. +39-347-1032433 50142 Firenze http://www.truelite.it Tel. +39-055-7879597 From devzero at web.de Fri Jul 10 16:06:35 2020 From: devzero at web.de (Roland) Date: Fri, 10 Jul 2020 16:06:35 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> Message-ID: <883f66d0-cee9-fa5b-e914-fb8cc63bb5f9@web.de> i think there may be a misunderstanding here or i was not clear enough to express what i meant. i guess in terms of backup storage,? pbs is doing similar to what borgbackup does - so indeed that IS i/o and storage effient , but that refers to the backup target side. but what about the backup source? I was referring to VMware cbt as that is a means of avoiding I/O on the VM storage, i.e. the backup source. afaik, proxmox/kvm does not (yet) have something like that !? I you have lot's of terabytes of VM disks, each incremental backup run will hog the VMs storage (the same like full backup). In VMware, this is adressed with "changed block tracking", as a backup agent can determine which blocks of a VMs disks have changed between incremental backups, so it won't need to scan through the whole VMs disks on each differential/incremental backup run. see: https://kb.vmware.com/s/article/1020128 https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100 i don't want to criticize proxmox, i think proxmox is fantastic, i just want to know what we get ( and what we don't get). regards roland Am 10.07.20 um 15:43 schrieb Thomas Lamprecht: > On 10.07.20 14:24, Roland wrote: >> fantastic! :) >> >> but - how does it work ? > It uses a content addressable storage to save the data chunks. > Effectively, the same data chunk doesn't uses additional storage if saved more than once. 
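The idea behind that is easy to sketch with plain shell tools - a toy model of a content addressable chunk store, nothing to do with the real on-disk format (vm-disk.raw and chunks/ are made-up names, fixed 4 MiB chunks for simplicity):

  mkdir -p chunks
  split -b 4M vm-disk.raw /tmp/chunk.
  for c in /tmp/chunk.*; do
      d=$(sha256sum "$c" | cut -d' ' -f1)         # the digest is the chunk's identity
      [ -e "chunks/$d" ] || cp "$c" "chunks/$d"   # a chunk seen before costs no extra space
  done

Two backups that share most of their data then share most of their chunks, which is where the 99% reuse in the incremental backup log further down in this thread comes from (the real store also uses dynamically sized chunks for file archives, but the principle is the same).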
> From t.lamprecht at proxmox.com Fri Jul 10 16:15:02 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 10 Jul 2020 16:15:02 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <883f66d0-cee9-fa5b-e914-fb8cc63bb5f9@web.de> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> <883f66d0-cee9-fa5b-e914-fb8cc63bb5f9@web.de> Message-ID: <8433ecf2-0a9e-6a2f-2c5e-08a2d6967190@proxmox.com> On 10.07.20 16:06, Roland wrote: > i think there may be a misunderstanding here or i was not clear enough > to express what i meant. > > i guess in terms of backup storage,? pbs is doing similar to what > borgbackup does - so indeed that IS i/o and storage effient , but that > refers to the backup target side. > > but what about the backup source? > > I was referring to VMware cbt as that is a means of avoiding I/O on the > VM storage, i.e. the backup source. > > afaik, proxmox/kvm does not (yet) have something like that !? Proxmox Backup Server and Proxmox VE supports tracking what changed with dirty-bitmaps, this avoids reading anything from the storage and sending anything over the network that has not changed. > > I you have lot's of terabytes of VM disks, each incremental backup run > will hog the VMs storage (the same like full backup). > > In VMware, this is adressed with "changed block tracking", as a backup > agent can determine which blocks of a VMs disks have changed between > incremental backups, so it won't need to scan through the whole VMs > disks on each differential/incremental backup run. see above, we effectively support both - deduplication to reduce target storage impact and incremental backups to reduce source storage and network impact. https://pbs.proxmox.com/docs/introduction.html#main-features > > see: > https://kb.vmware.com/s/article/1020128 > https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100 > > i don't want to criticize proxmox, i think proxmox is fantastic, i just > want to know what we get ( and what we don't get). > No worries, no offense taken ;) cheers, Thomas From t.lamprecht at proxmox.com Fri Jul 10 16:23:51 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 10 Jul 2020 16:23:51 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: On 10.07.20 16:01, Simone Piccardi wrote: > Il 10/07/20 12:56, Martin Maurer ha scritto: >> We are proud to announce the first beta release of our new Proxmox Backup Server. >> > > > Thanks for the effort, that's very interesting. > Two question: > > 1. Having two indipendend Proxmox server can install it on both, to do a cross backup? You can add remotes to Proxmox Backup servers which can be synced efficiently and also automatically with a set schedule. And, you can also use it as target for multiple seprate Proxmox VE clusters, albeit some optimizations are still planned here: https://pbs.proxmox.com/wiki/index.php/Roadmap > 2. There is a stress on ZFS support on the kernel and in the documentation there is a chapter on managing it, it's not clear to me if this is needed just for better performance or I can use it also just using an installation having just LVM > Effectively you can use whatever is supported on the system where Proxmox Backup Server is installed, it needs to be a filesystem. 
The web-interface of PBS supports creating an ext4 or XFS backed datastore besides ZFS also. We recommend ZFS mainly because it has built-in support to get some redundancy easily and can work with really huge datasets (hundreds of TB), so this makes it ideal for a future proof Backup Server where hundreds to thousand of hosts backup too. If you're rather happy with another filesystem as backing datastore you can naturally use it :) cheers, Thomas From devzero at web.de Fri Jul 10 16:46:08 2020 From: devzero at web.de (Roland) Date: Fri, 10 Jul 2020 16:46:08 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <8433ecf2-0a9e-6a2f-2c5e-08a2d6967190@proxmox.com> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> <883f66d0-cee9-fa5b-e914-fb8cc63bb5f9@web.de> <8433ecf2-0a9e-6a2f-2c5e-08a2d6967190@proxmox.com> Message-ID: <57e9a9be-0907-d0c8-4ff7-8015c1585584@web.de> wo this is great to hear, thanks ! Am 10.07.20 um 16:15 schrieb Thomas Lamprecht: > On 10.07.20 16:06, Roland wrote: >> i think there may be a misunderstanding here or i was not clear enough >> to express what i meant. >> >> i guess in terms of backup storage,? pbs is doing similar to what >> borgbackup does - so indeed that IS i/o and storage effient , but that >> refers to the backup target side. >> >> but what about the backup source? >> >> I was referring to VMware cbt as that is a means of avoiding I/O on the >> VM storage, i.e. the backup source. >> >> afaik, proxmox/kvm does not (yet) have something like that !? > Proxmox Backup Server and Proxmox VE supports tracking what changed with > dirty-bitmaps, this avoids reading anything from the storage and sending > anything over the network that has not changed. > >> I you have lot's of terabytes of VM disks, each incremental backup run >> will hog the VMs storage (the same like full backup). >> >> In VMware, this is adressed with "changed block tracking", as a backup >> agent can determine which blocks of a VMs disks have changed between >> incremental backups, so it won't need to scan through the whole VMs >> disks on each differential/incremental backup run. > see above, we effectively support both - deduplication to reduce target > storage impact and incremental backups to reduce source storage and > network impact. > > https://pbs.proxmox.com/docs/introduction.html#main-features > >> see: >> https://kb.vmware.com/s/article/1020128 >> https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100 >> >> i don't want to criticize proxmox, i think proxmox is fantastic, i just >> want to know what we get ( and what we don't get). >> > No worries, no offense taken ;) > > cheers, > Thomas > > From iztok.gregori at elettra.eu Fri Jul 10 17:20:46 2020 From: iztok.gregori at elettra.eu (Iztok Gregori) Date: Fri, 10 Jul 2020 17:20:46 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <521875662.472.1594388487491@webmail.proxmox.com> References: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> <521875662.472.1594388487491@webmail.proxmox.com> Message-ID: <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu> On 10/07/20 15:41, Dietmar Maurer wrote: >> Are you planning to support also CEPH (or other distributed file >> systems) as destination storage backend? > > It is already possible to put the datastore a a mounted cephfs, or > anything you can mount on the host. 
Is this "mount" managed by PBS or you have to "manually" mount it outside PBS? > > But this means that you copy data over the network multiple times, > so this is not the best option performance wise... True, PBS will act as a gateway to the backing storage cluster, but the data will be only re-routed to the final destination (in this case and OSD) not copied over (putting aside the CEPH replication policy). So performance wise you are limited by the bandwidth of the PBS network interfaces (as you will be for a local network storage server) and to the speed of the backing CEPH cluster. Maybe you will loose something on raw performance (but depending on the CEPH cluster you could gain also something) but you will gain the ability of "easily" expandable storage space and no single point of failure. Thanks a lot for your work! Iztok Gregori From dietmar at proxmox.com Fri Jul 10 17:31:26 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 17:31:26 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu> References: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> <521875662.472.1594388487491@webmail.proxmox.com> <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu> Message-ID: <39487713.486.1594395087264@webmail.proxmox.com> > On 10/07/20 15:41, Dietmar Maurer wrote: > >> Are you planning to support also CEPH (or other distributed file > >> systems) as destination storage backend? > > > > It is already possible to put the datastore a a mounted cephfs, or > > anything you can mount on the host. > > Is this "mount" managed by PBS or you have to "manually" mount it > outside PBS? Not sure what kind of management you need for that? Usually people mount filesystems using /etc/fstab or by creating systemd mount units. > > But this means that you copy data over the network multiple times, > > so this is not the best option performance wise... > > True, PBS will act as a gateway to the backing storage cluster, but the > data will be only re-routed to the final destination (in this case and > OSD) not copied over (putting aside the CEPH replication policy). That is probably a very simplistic view of things. It involves copying data multiple times, so I will affect performance by sure. Note: We take about huge amounts of data. > So > performance wise you are limited by the bandwidth of the PBS network > interfaces (as you will be for a local network storage server) and to > the speed of the backing CEPH cluster. Maybe you will loose something on > raw performance (but depending on the CEPH cluster you could gain also > something) but you will gain the ability of "easily" expandable storage > space and no single point of failure. Sure, that's true. Would be interesting to get some performance stats for such setup... From dietmar at proxmox.com Fri Jul 10 17:41:53 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 17:41:53 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1768587204.461.1594383204281@webmail.proxmox.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> Message-ID: <450971473.487.1594395714078@webmail.proxmox.com> > On 07/10/2020 2:13 PM Dietmar Maurer wrote: > > > > Before I get into it - does the backup server support copying the > > backups to a external device such as a USB Drive so I can rotate backups > > offsite? 
> > I guess you can simply use rsync to copy the datastore to the usb stick. Also, we already have plans to add tape support, so we may support USB drives as backup media when we implement that. But that is work for the futures ... From danielb at numberall.com Fri Jul 10 17:40:50 2020 From: danielb at numberall.com (Daniel Bayerdorffer) Date: Fri, 10 Jul 2020 11:40:50 -0400 (EDT) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <1177981668.12454.1594395650653.JavaMail.zimbra@numberall.com> Hello Proxmox, Thank you for creating this new product. It looks great. It's also good timing as I'm getting ready to revamp our backup strategy. I was planning on doing something like described below, and I'm wondering if PBS can do this. We have one Hypervisor (we are very small) I have two ZFS storage pools. dpool- for running VM's bpool- for backing up our VM's both are SAS attached HDD's. Right now I use pve-zsync to backup the VM's to bpool on a 15 minute, daily, weekly, and monthly basis. I WANT to send the weekly snapshots to an offsite pool. I was going to use zfs send to do this. Can PBS backup to our local zfs pool, and then sync to the remote server. If so, does it use zfs send? Finally, can I somehow move the snapshots that pve-zsync is currently creating to PBS? Versus destroying them and starting over again? Thanks, Daniel ----- Original Message ----- From: "Martin Maurer" To: "PVE User List" , "pve-devel" , pbs-devel at lists.proxmox.com Sent: Friday, July 10, 2020 6:56:46 AM Subject: [PVE-User] Proxmox Backup Server (beta) We are proud to announce the first beta release of our new Proxmox Backup Server. It's an enterprise-class client-server backup software that backups virtual machines, containers, and physical hosts. It is specially optimized for the Proxmox Virtual Environment platform and allows you to backup and replicate your data securely. It provides easy management with a command line and web-based user interface, and is licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). Proxmox Backup Server supports incremental backups, deduplication, compression and authenticated encryption. Using Rust https://www.rust-lang.org/ as implementation language guarantees high performance, low resource usage, and a safe, high quality code base. It features strong encryption done on the client side. Thus, it?s possible to backup data to not fully trusted targets. Main Features Support for Proxmox VE: The Proxmox Virtual Environment is fully supported and you can easily backup virtual machines (supporting QEMU dirty bitmaps - https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. Performance: The whole software stack is written in Rust https://www.rust-lang.org/, to provide high speed and memory efficiency. Deduplication: Periodic backups produce large amounts of duplicate data. The deduplication layer avoids redundancy and minimizes the used storage space. Incremental backups: Changes between backups are typically low. Reading and sending only the delta reduces storage and network impact of backups. Data Integrity: The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum algorithm assures the accuracy and consistency of your backups. Remote Sync: It is possible to efficiently synchronize data to remote sites. Only deltas containing new data are transferred. Compression: The ultra fast Zstandard compression is able to compress several gigabytes of data per second. 
Encryption: Backups can be encrypted on the client-side using AES-256 in Galois/Counter Mode (GCM https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated encryption mode provides very high performance on modern hardware. Web interface: Manage Proxmox backups with the integrated web-based user interface. Open Source: No secrets. Proxmox Backup Server is free and open-source software. The source code is licensed under AGPL, v3. Support: Enterprise support will be available from Proxmox. And of course - Backups can be restored! Release notes https://pbs.proxmox.com/wiki/index.php/Roadmap Download https://www.proxmox.com/downloads Alternate ISO download: http://download.proxmox.com/iso Documentation https://pbs.proxmox.com Community Forum https://forum.proxmox.com Source Code https://git.proxmox.com Bugtracker https://bugzilla.proxmox.com FAQ Q: How does this integrate into Proxmox VE? A: Just add your Proxmox Backup Server storage as new storage backup target to your Proxmox VE. Make sure that you have at least pve-manager 6.2-9 installed. Q: What will happen with the existing Proxmox VE backup (vzdump)? A: You can still use vzdump. The new backup is an additional but very powerful way to backup and restore your VMs and container. Q: Can I already backup my other Debian servers (file backup agent)? A: Yes, just install the Proxmox Backup Client (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). Q: Are there already backup agents for other distributions? A: Not packaged yet, but using a statically linked binary should work in most cases on modern Linux OS (work in progress). Q: Is there any recommended server hardware for the Proxmox Backup Server? A: Use enterprise class server hardware with enough disks for the (big) ZFS pool holding your backup data. The Backup Server should be in the same datacenter as your Proxmox VE hosts. Q: Where can I get more information about coming feature updates? A: Follow the announcement forum, pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and subscribe to our newsletter https://www.proxmox.com/news. Please help us reaching the final release date by testing this beta and by providing feedback via https://forum.proxmox.com -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From lindsay.mathieson at gmail.com Fri Jul 10 17:59:37 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 01:59:37 +1000 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: Have been reading through the PDF docs, concise and well written, thanks. 
-- Lindsay From iztok.gregori at elettra.eu Fri Jul 10 18:29:22 2020 From: iztok.gregori at elettra.eu (Iztok Gregori) Date: Fri, 10 Jul 2020 18:29:22 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <39487713.486.1594395087264@webmail.proxmox.com> References: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> <521875662.472.1594388487491@webmail.proxmox.com> <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu> <39487713.486.1594395087264@webmail.proxmox.com> Message-ID: <511457c6-d878-b157-18da-0140a83ef52b@elettra.eu> On 10/07/20 17:31, Dietmar Maurer wrote: >> On 10/07/20 15:41, Dietmar Maurer wrote: >>>> Are you planning to support also CEPH (or other distributed file >>>> systems) as destination storage backend? >>> >>> It is already possible to put the datastore a a mounted cephfs, or >>> anything you can mount on the host. >> >> Is this "mount" managed by PBS or you have to "manually" mount it >> outside PBS? > > Not sure what kind of management you need for that? Usually people > mount filesystems using /etc/fstab or by creating systemd mount units. In PVE you can add a storage (like NFS for example) via GUI (or directly via config file) and, if I'm not mistaken, from the PVE will "manage" the storage (mount it under /mnt/pve, not performing a backup if the storage is not ready and so on). > >>> But this means that you copy data over the network multiple times, >>> so this is not the best option performance wise... >> >> True, PBS will act as a gateway to the backing storage cluster, but the >> data will be only re-routed to the final destination (in this case and >> OSD) not copied over (putting aside the CEPH replication policy). > > That is probably a very simplistic view of things. It involves copying data > multiple times, so I will affect performance by sure. The replication you mean? Yes, it "copies"/distribute the same data on multiple targets/disk (more or less the same RAID or ZFS does). But I'm not aware of the internals of PBS so maybe my reasoning is really to simplistic. > > Note: We take about huge amounts of data. We daily backup with vzdump over NFS 2TB of data. Clearly because all of the backups are full backups we need a lot of space for keeping a reasonable retention (8 daily backups + 3 weekly). I resorted to cycle to 5 relatively huge NFS server, but it involved a complex backup-schedule. But because the amount of data is growing we are searching for a backup solution which can be integrated in PVE and could be easily expandable. > >> So >> performance wise you are limited by the bandwidth of the PBS network >> interfaces (as you will be for a local network storage server) and to >> the speed of the backing CEPH cluster. Maybe you will loose something on >> raw performance (but depending on the CEPH cluster you could gain also >> something) but you will gain the ability of "easily" expandable storage >> space and no single point of failure. > > Sure, that's true. Would be interesting to get some performance stats for > such setup... You mean performance stats about CEPH or about PBS backed with CEPHfs? For the latter we could try something in Autumn when some servers will became available. 
Cheers Iztok Gregori From dietmar at proxmox.com Fri Jul 10 18:32:18 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 18:32:18 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: Message-ID: <1907249309.493.1594398738967@webmail.proxmox.com> > I was planning on doing something like described below, and I'm wondering if PBS can do this. > > We have one Hypervisor (we are very small) > > I have two ZFS storage pools. > dpool- for running VM's > bpool- for backing up our VM's > both are SAS attached HDD's. > > Right now I use pve-zsync to backup the VM's to bpool on a 15 minute, daily, weekly, and monthly basis. > > I WANT to send the weekly snapshots to an offsite pool. I was going to use zfs send to do this. > > Can PBS backup to our local zfs pool, that works > and then sync to the remote server. if the remote site is a proxmox backup server > If so, does it use zfs send? no. > Finally, can I somehow move the snapshots that pve-zsync is currently creating to PBS? Versus destroying them and starting over again? We currently do not have any tools for pve-zsync/proxmox-backup-server interaction. So far, I though those are complete different concepts... From dietmar at proxmox.com Fri Jul 10 18:46:49 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Fri, 10 Jul 2020 18:46:49 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <511457c6-d878-b157-18da-0140a83ef52b@elettra.eu> References: <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu> <521875662.472.1594388487491@webmail.proxmox.com> <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu> <39487713.486.1594395087264@webmail.proxmox.com> <511457c6-d878-b157-18da-0140a83ef52b@elettra.eu> Message-ID: <1901933384.494.1594399610529@webmail.proxmox.com> > >> Is this "mount" managed by PBS or you have to "manually" mount it > >> outside PBS? > > > > Not sure what kind of management you need for that? Usually people > > mount filesystems using /etc/fstab or by creating systemd mount units. > > In PVE you can add a storage (like NFS for example) via GUI (or directly > via config file) and, if I'm not mistaken, from the PVE will "manage" > the storage (mount it under /mnt/pve, not performing a backup if the > storage is not ready and so on). Ah, yes. We currectly restrict ourself to local disks (because of the performance implication). > >>> But this means that you copy data over the network multiple times, > >>> so this is not the best option performance wise... > >> > >> True, PBS will act as a gateway to the backing storage cluster, but the > >> data will be only re-routed to the final destination (in this case and > >> OSD) not copied over (putting aside the CEPH replication policy). > > > > That is probably a very simplistic view of things. It involves copying data > > multiple times, so I will affect performance by sure. > > The replication you mean? Yes, it "copies"/distribute the same data on > multiple targets/disk (more or less the same RAID or ZFS does). But I'm > not aware of the internals of PBS so maybe my reasoning is really to > simplistic. > > > > > Note: We take about huge amounts of data. > > We daily backup with vzdump over NFS 2TB of data. Clearly because all of > the backups are full backups we need a lot of space for keeping a > reasonable retention (8 daily backups + 3 weekly). I resorted to cycle > to 5 relatively huge NFS server, but it involved a complex > backup-schedule. 
But because the amount of data is growing we are > searching for a backup solution which can be integrated in PVE and could > be easily expandable. I would start using proxmox-backup server the way it is designed for, using a local zfs storage pool for the backups. This is high performance and future proof. To get redundancy, you can use a second backup server and sync the backups. This is also much simpler to recover things, because there is no need to get ceph storage online first (Always plan for recovery..). But sure, you can also use cepfs if it meets your performance requirements and you have enough network bandwidth. From devzero at web.de Fri Jul 10 19:31:10 2020 From: devzero at web.de (Roland) Date: Fri, 10 Jul 2020 19:31:10 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <57e9a9be-0907-d0c8-4ff7-8015c1585584@web.de> References: <29674be5-026b-0a89-29ba-3951a99048b1@web.de> <1935605059.460.1594382980972@webmail.proxmox.com> <147b5bcd-ca6d-dd89-7389-d9a0e5de4726@web.de> <89ad8a6e-4d67-4782-9c0c-e403f7bde090@proxmox.com> <883f66d0-cee9-fa5b-e914-fb8cc63bb5f9@web.de> <8433ecf2-0a9e-6a2f-2c5e-08a2d6967190@proxmox.com> <57e9a9be-0907-d0c8-4ff7-8015c1585584@web.de> Message-ID: <10bb508e-3b24-3d1e-e15f-33fc0b5d9eac@web.de> works like a charm. 2 seconds for finishing an incremental backup job. works with qcow2, works with zvol. (did not test restore yet) I'm impressed.? congratulations! roland INFO: starting new backup job: vzdump 101 --node pve1.local --storage pbs.local --quiet 1 --mailnotification always --all 0 --compress zstd --mode snapshot INFO: Starting Backup of VM 101 (qemu) INFO: Backup started at 2020-07-10 19:16:03 INFO: status = running INFO: VM Name: grafana.local INFO: include disk 'scsi0' 'local-zfs-files:101/vm-101-disk-0.qcow2' 20G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating Proxmox Backup Server archive 'vm/101/2020-07-10T17:16:03Z' INFO: issuing guest-agent 'fs-freeze' command INFO: issuing guest-agent 'fs-thaw' command INFO: started backup task '5a0a0ef3-2802-42e0-acc3-06147ad1549f' INFO: resuming VM again INFO: using fast incremental mode (dirty-bitmap), 48.0 MiB dirty of 20.0 GiB total INFO: status: 100% (48.0 MiB of 48.0 MiB), duration 1, read: 48.0 MiB/s, write: 48.0 MiB/s INFO: backup was done incrementally, reused 19.95 GiB (99%) INFO: transferred 48.00 MiB in 1 seconds (48.0 MiB/s) INFO: run: /usr/bin/proxmox-backup-client prune vm/101 --quiet 1 --keep-last 2 --repository root at pam@172.16.37.106:ds_backup1 INFO: vm/101/2020-07-10T17:13:29Z Fri Jul 10 19:13:29 2020 remove INFO: Finished Backup of VM 101 (00:00:02) INFO: Backup finished at 2020-07-10 19:16:05 INFO: Backup job finished successfully TASK OK Am 10.07.20 um 16:46 schrieb Roland: > wo this is great to hear, thanks ! > > Am 10.07.20 um 16:15 schrieb Thomas Lamprecht: >> On 10.07.20 16:06, Roland wrote: >>> i think there may be a misunderstanding here or i was not clear enough >>> to express what i meant. >>> >>> i guess in terms of backup storage,? pbs is doing similar to what >>> borgbackup does - so indeed that IS i/o and storage effient , but that >>> refers to the backup target side. >>> >>> but what about the backup source? >>> >>> I was referring to VMware cbt as that is a means of avoiding I/O on the >>> VM storage, i.e. the backup source. >>> >>> afaik, proxmox/kvm does not (yet) have something like that !? 
>> Proxmox Backup Server and Proxmox VE supports tracking what changed with >> dirty-bitmaps, this avoids reading anything from the storage and sending >> anything over the network that has not changed. >> >>> I you have lot's of terabytes of VM disks, each incremental backup run >>> will hog the VMs storage (the same like full backup). >>> >>> In VMware, this is adressed with "changed block tracking", as a backup >>> agent can determine which blocks of a VMs disks have changed between >>> incremental backups, so it won't need to scan through the whole VMs >>> disks on each differential/incremental backup run. >> see above, we effectively support both - deduplication to reduce target >> storage impact and incremental backups to reduce source storage and >> network impact. >> >> https://pbs.proxmox.com/docs/introduction.html#main-features >> >>> see: >>> https://kb.vmware.com/s/article/1020128 >>> https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100 >>> >>> >>> i don't want to criticize proxmox, i think proxmox is fantastic, i just >>> want to know what we get ( and what we don't get). >>> >> No worries, no offense taken ;) >> >> cheers, >> Thomas >> >> From lindsay.mathieson at gmail.com Sat Jul 11 07:05:13 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 15:05:13 +1000 Subject: [PVE-User] Backup Beta - restore failing Message-ID: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> Have installed the Backup Server on a VM with storage on a NFS serve (our NAS) - not recommend I know, but its for testing. Have backups working fine, performance is ok and the diff backups are very fast. However can't get a restore to work, get this error in the web gui: /usr/bin/proxmox-backup-client restore '--crypt-mode=none' vm/311/2020-07-11T00:57:52Z index.json /var/tmp/vzdumptmp121335/index.json --repository proxmox at pbs@192.168.5.49:test' failed: exit code 255 If I run the same command from the console, I get: /usr/bin/proxmox-backup-client restore '--crypt-mode=none' vm/311/2020-07-11T00:57:52Z index.json /var/tmp/vzdumptmp121335/index.json --repository proxmox at pbs@192.168.5.49 Error: parameter verification errors parameter 'repository': value does not match the regex pattern proxmox at pbs has DatastorePowerUser rights. -- Lindsay From dietmar at proxmox.com Sat Jul 11 08:33:12 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sat, 11 Jul 2020 08:33:12 +0200 (CEST) Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> Message-ID: <643258476.506.1594449193539@webmail.proxmox.com> > Have backups working fine, performance is ok and the diff backups are > very fast. However can't get a restore to work, get this error in the > web gui: > > /usr/bin/proxmox-backup-client restore '--crypt-mode=none' > vm/311/2020-07-11T00:57:52Z index.json > /var/tmp/vzdumptmp121335/index.json --repository > proxmox at pbs@192.168.5.49:test' failed: exit code 255 > > > If I run the same command from the console, I get: > > /usr/bin/proxmox-backup-client restore '--crypt-mode=none' > vm/311/2020-07-11T00:57:52Z index.json > /var/tmp/vzdumptmp121335/index.json --repository > proxmox at pbs@192.168.5.49 > Error: parameter verification errors > > parameter 'repository': value does not match the regex pattern > > > proxmox at pbs has DatastorePowerUser rights. 
Seem there is a problem how we compute the repository - the datastore part is missing. It should look like :. For example, if your datastore name is 'store1': --repository proxmox at pbs@192.168.5.49:store1 You can test easily if that works with: # proxmox-backup-client snapshots --repository proxmox at pbs@192.168.5.49:store1 Please can you show post the pbs storage configuration? From lindsay.mathieson at gmail.com Sat Jul 11 09:09:43 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 17:09:43 +1000 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <643258476.506.1594449193539@webmail.proxmox.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> Message-ID: <570fbad1-ae4c-7f22-b5b3-e2bdbef13371@gmail.com> On 11/07/2020 4:33 pm, Dietmar Maurer wrote: > Please can you show post the pbs storage configuration? pbs: backupbeta ??????? datastore test ??????? server 192.168.5.49 ??????? content backup ??????? fingerprint 1c:f0:e0:76:67:a5:f3:b7:b6:2b:5c:33:xxxxxxxxx:98:8f:31:15:0b:0d:53:93:2b:2f:52 ??????? maxfiles 2 ??????? nodes vni,vnh,vnb ??????? username proxmox at pbs > Seem there is a problem how we compute the repository - the datastore part is missing. > It should look like :. For example, if your datastore > name is 'store1': > > --repositoryproxmox at pbs@192.168.5.49:store1 > > You can test easily if that works with: > > # proxmox-backup-client snapshots --repositoryproxmox at pbs@192.168.5.49:store1 ?proxmox-backup-client restore '--crypt-mode=none' vm/311/2020-07-11T00:57:52Z index.json /var/tmp/vzdumptmp121335/index.json --repository proxmox at pbs@192.168.5.49:backupbeta Processed without the param verification error and I got asked for the password, and two fingerprint verications/did I really want to continue prompts, but then it fails with: Error: HTTP Error 400 Bad Request: no permissions nb: * Std install with no custom certs, so using the default self generated certs in the proxmox cluster and backup server * Have added the pub keys to authorized_keys in both the ProxMox host and the Backup server so automatic ssh works in both directions. Should I try adding the backup storage in proxmox as a FQDN rather than a static IP? -- Lindsay From lindsay.mathieson at gmail.com Sat Jul 11 09:17:48 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 17:17:48 +1000 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <643258476.506.1594449193539@webmail.proxmox.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> Message-ID: On 11/07/2020 4:33 pm, Dietmar Maurer wrote: > Seem there is a problem how we compute the repository - the datastore part is missing. > It should look like :. For example, if your datastore > name is 'store1': > > --repositoryproxmox at pbs@192.168.5.49:store1 Actually that looks to be my fault - looking at the gui log I posted, the ":test" is in fact there, I just neglected to copy it to the command line :( Sorry. 
However now that side is working ok, getting: Error: HTTP Error 400 Bad Request: no permissions -- Lindsay From lindsay.mathieson at gmail.com Sat Jul 11 10:17:52 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 18:17:52 +1000 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> Message-ID: On 11/07/2020 5:17 pm, Lindsay Mathieson wrote: > > However now that side is working ok, getting: > > Error: HTTP Error 400 Bad Request: no permissions > Works when I use root at pam -- Lindsay From t.lamprecht at proxmox.com Sat Jul 11 10:24:43 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 11 Jul 2020 10:24:43 +0200 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> Message-ID: <9cbdcf30-807a-de5d-3759-6ba5e9405c0d@proxmox.com> On 11.07.20 10:17, Lindsay Mathieson wrote: > On 11/07/2020 5:17 pm, Lindsay Mathieson wrote: >> >> However now that side is working ok, getting: >> >> Error: HTTP Error 400 Bad Request: no permissions >> > Works when I use root at pam > If you add and user it starts out with no permissions, so you need to add some to make it work. https://pbs.proxmox.com/docs/administration-guide.html#access-control The simplest would probably be to give the DatastoreAdmin role on the /datastore/ path. HTH From dietmar at proxmox.com Sat Jul 11 10:28:50 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sat, 11 Jul 2020 10:28:50 +0200 (CEST) Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <570fbad1-ae4c-7f22-b5b3-e2bdbef13371@gmail.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> <570fbad1-ae4c-7f22-b5b3-e2bdbef13371@gmail.com> Message-ID: <515707352.507.1594456131110@webmail.proxmox.com> > > Please can you show post the pbs storage configuration? > > > pbs: backupbeta > datastore test > server 192.168.5.49 > content backup > fingerprint 1c:f0:e0:76:67:a5:f3:b7:b6:2b:5c:33:xxxxxxxxx:98:8f:31:15:0b:0d:53:93:2b:2f:52 > maxfiles 2 > nodes vni,vnh,vnb > username proxmox at pbs > > > > > Seem there is a problem how we compute the repository - the datastore part is missing. > > It should look like :. For example, if your datastore > > name is 'store1': > > > > --repository proxmox at pbs@192.168.5.49:store1 > > > > You can test easily if that works with: > > > > # proxmox-backup-client snapshots --repository proxmox at pbs@192.168.5.49:store1 > > > proxmox-backup-client restore '--crypt-mode=none' vm/311/2020-07-11T00:57:52Z index.json /var/tmp/vzdumptmp121335/index.json --repository proxmox at pbs@192.168.5.49:backupbeta The datastore name in 'test', so please try "proxmox at pbs@192.168.5.49:test" instead. 
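For reference, with the datastore appended the calls should then look roughly like this (snapshot name and target path taken from your earlier log, repository written here with the literal '@' signs - adjust as needed):

# proxmox-backup-client snapshots --repository proxmox@pbs@192.168.5.49:test

# proxmox-backup-client restore '--crypt-mode=none' \
    vm/311/2020-07-11T00:57:52Z index.json \
    /var/tmp/vzdumptmp121335/index.json \
    --repository proxmox@pbs@192.168.5.49:test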
From dietmar at proxmox.com Sat Jul 11 11:55:51 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sat, 11 Jul 2020 11:55:51 +0200 (CEST) Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <515707352.507.1594456131110@webmail.proxmox.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> <570fbad1-ae4c-7f22-b5b3-e2bdbef13371@gmail.com> <515707352.507.1594456131110@webmail.proxmox.com> Message-ID: <683361451.512.1594461352067@webmail.proxmox.com> > > proxmox-backup-client restore '--crypt-mode=none' vm/311/2020-07-11T00:57:52Z index.json /var/tmp/vzdumptmp121335/index.json --repository proxmox at pbs@192.168.5.49:backupbeta > > The datastore name in 'test', so please try "proxmox at pbs@192.168.5.49:test" instead. Please note that 'proxmox-backup-client' uses backup server datastore names, instead of pve storage names. I guess this can be confusing ... From lists at merit.unu.edu Sat Jul 11 13:03:41 2020 From: lists at merit.unu.edu (mj) Date: Sat, 11 Jul 2020 13:03:41 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <450971473.487.1594395714078@webmail.proxmox.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> Message-ID: On 7/10/20 5:41 PM, Dietmar Maurer wrote: > Also, we already have plans to add tape support, so we may support USB drives > as backup media when we implement that. But that is work for the futures ... Tape support would be truly fantastic! We are still using storix for our tape backups, and have been looking for an alternative for a couple of years now. Being able to use Proxmox Backup Server as storix replacement would be great. We have one bare-metal linux server that we are also backing up to tape using storix. I guess when adopting Proxmox Backup Server, we would need to find a new solution for that bare-metal server? (as in: Proxmox Backup Server is *only* capable to backup VM's, right..?) Proxmox Backup Server is a great addition to the proxmox line of products! Thanks a lot! MJ From t.lamprecht at proxmox.com Sat Jul 11 13:38:02 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 11 Jul 2020 13:38:02 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> Message-ID: On 11.07.20 13:03, mj wrote: > On 7/10/20 5:41 PM, Dietmar Maurer wrote: >> Also, we already have plans to add tape support, so we may support USB drives >> as backup media when we implement that. But that is work for the futures ... > > Tape support would be truly fantastic! We are still using storix for our tape backups, and have been looking for an alternative for a couple of years now. > > Being able to use Proxmox Backup Server as storix replacement would be great. > > We have one bare-metal linux server that we are also backing up to tape using storix. I guess when adopting Proxmox Backup Server, we would need to find a new solution for that bare-metal server? > > (as in: Proxmox Backup Server is *only* capable to backup VM's, right..?) Nope, can do everything[0][1][2]! You can do file-based backups also. The client is a statically linked binary and runs on every relatively current Linux with an amd64 based CPU, so it doesn't even has to be a Debian server. 
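As a rough sketch (archive name, backup id and repository below are placeholders, not a recipe), a file-level backup of such a bare-metal host is a single client call:

$ proxmox-backup-client backup root.pxar:/ \
    --backup-type host \
    --backup-id my-bare-metal-box \
    --repository backupuser@pbs@backupserver.example.com:store1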
cheers, Thomas [0]: besides file-based backup filesystems not accessible in Linux, yet ;) [1]: https://pbs.proxmox.com/docs/administration-guide.html#creating-backups [2]: https://pbs.proxmox.com/docs/introduction.html#main-features From lindsay.mathieson at gmail.com Sat Jul 11 14:46:31 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 22:46:31 +1000 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <9cbdcf30-807a-de5d-3759-6ba5e9405c0d@proxmox.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> <9cbdcf30-807a-de5d-3759-6ba5e9405c0d@proxmox.com> Message-ID: <0b44c44e-8481-96de-ae8c-cce81b657bee@gmail.com> On 11/07/2020 6:24 pm, Thomas Lamprecht wrote: > If you add and user it starts out with no permissions, so you need to > add some to make it work. > https://pbs.proxmox.com/docs/administration-guide.html#access-control > > The simplest would probably be to give the DatastoreAdmin role on the > /datastore/ path. I did add the DatastorePowerUser role to path "/" for the user, but it didn't seem to take. After a bit of testing I found I had to switch to a different VM and back again in the GUI before it recognised the role addition. Maybe some caching at play? Restores and Show config are working now. -- Lindsay From lindsay.mathieson at gmail.com Sat Jul 11 14:47:08 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sat, 11 Jul 2020 22:47:08 +1000 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <515707352.507.1594456131110@webmail.proxmox.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> <570fbad1-ae4c-7f22-b5b3-e2bdbef13371@gmail.com> <515707352.507.1594456131110@webmail.proxmox.com> Message-ID: On 11/07/2020 6:28 pm, Dietmar Maurer wrote: > The datastore name in 'test', so please try"proxmox at pbs@192.168.5.49:test" instead. Yup, that did the trick, thanks. -- Lindsay From t.lamprecht at proxmox.com Sat Jul 11 15:01:18 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 11 Jul 2020 15:01:18 +0200 Subject: [PVE-User] Backup Beta - restore failing In-Reply-To: <0b44c44e-8481-96de-ae8c-cce81b657bee@gmail.com> References: <5ff0ca95-a2fe-7d89-3300-0a872dd7bddf@gmail.com> <643258476.506.1594449193539@webmail.proxmox.com> <9cbdcf30-807a-de5d-3759-6ba5e9405c0d@proxmox.com> <0b44c44e-8481-96de-ae8c-cce81b657bee@gmail.com> Message-ID: <59590743-4b1b-c2f8-3484-23b0393ab2c8@proxmox.com> On 11.07.20 14:46, Lindsay Mathieson wrote: > On 11/07/2020 6:24 pm, Thomas Lamprecht wrote: >> If you add and user it starts out with no permissions, so you need to >> add some to make it work. >> https://pbs.proxmox.com/docs/administration-guide.html#access-control >> >> The simplest would probably be to give the DatastoreAdmin role on the >> /datastore/ path. > > I did add the DatastorePowerUser role to path "/" for the user, but it didn't seem to take. After a bit of testing I found I had to switch to a different VM and back again in the GUI before it recognised the role addition. Maybe some caching at play? > There is a cache for permissions, so there /could/ theoretically be an issue with not invalidating immediately after some permissions where changed. > > Restores and Show config are working now. > Ok, glad to hear! 
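A quick way to check whether a newly assigned role has actually taken effect is to list the snapshots as that user (repository string as used earlier in this thread, written with literal '@' signs); it keeps returning the same "no permissions" error until the ACL change is picked up:

# proxmox-backup-client snapshots --repository proxmox@pbs@192.168.5.49:test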
From lists at merit.unu.edu Sat Jul 11 15:34:46 2020 From: lists at merit.unu.edu (mj) Date: Sat, 11 Jul 2020 15:34:46 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> Message-ID: Hi Thomas, On 7/11/20 1:38 PM, Thomas Lamprecht wrote: > Nope, can do everything[0][1][2]! You can do file-based backups also. The client > is a statically linked binary and runs on every relatively current Linux with > an amd64 based CPU, so it doesn't even has to be a Debian server. That is great. And then some follow=up questions if I may...: - I don't see any 'DR' options, right? As in: bare metal disaster recovery restores, using a recovery boot iso, and restore a system from scratch to bootable state. It's not a tool for that, right? - I guess with VMs etc, the backup will use the available VM options (ceph, zfs, lvm) to snapshot a VM, in order to get consistent backups, like the current pve backup does. But how does that work with non-VM client? (some non-VM client systems run LVM, so lvm could be used to create a snapshot and backup that, for example. Does it do that? Will my non-VM mysql backups be consistent?) - Any timeframe for adding LTO tape support..? We're really excited, and time-permitted I will try to play around with this monday/tuesday. :-) MJ From t.lamprecht at proxmox.com Sat Jul 11 15:47:47 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 11 Jul 2020 15:47:47 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> Message-ID: <5e9d8265-a255-f726-b8a1-8bd3934b58ca@proxmox.com> Hi MJ, On 11.07.20 15:34, mj wrote: > On 7/11/20 1:38 PM, Thomas Lamprecht wrote: >> Nope, can do everything[0][1][2]! You can do file-based backups also. The client >> is a statically linked binary and runs on every relatively current Linux with >> an amd64 based CPU, so it doesn't even has to be a Debian server. > > That is great. > > And then some follow=up questions if I may...: > > - I don't see any 'DR' options, right? As in: bare metal disaster recovery restores, using a recovery boot iso, and restore a system from scratch to bootable state. It's not a tool for that, right? Currently there's no such integrated tool, but honestly I do not think that would be *that* hard to make. We have a similar process in plan for VMs, i.e., boot a VM with a live system and the backup disks as read only disks plugged in. Note also that the client has already support to mount an archive of a backup locally over a FUSE filesystem implementation - maybe that would help already. > > - I guess with VMs etc, the backup will use the available VM options (ceph, zfs, lvm) to snapshot a VM, in order to get consistent backups, like the current pve backup does. Yes. > But how does that work with non-VM client? (some non-VM client systems run LVM, so lvm could be used to create a snapshot and backup that, for example. Does it do that? Will my non-VM mysql backups be consistent?) So here I do not have all details in mind but AFAIK: mot yet, it detects some file changes where inconsistencies could have happened but doesn't yet tries to detect if the underlying storage supports snapshots and uses that to get a more consistent state. 
For containers we do that explicit through the vzdump tooling. > > - Any timeframe for adding LTO tape support..? No, currently I do not have any, I'm afraid. > We're really excited, and time-permitted I will try to play around with this monday/tuesday. :-) > Great, hope it fits your use case(s). cheers, Thomas From dietmar at proxmox.com Sat Jul 11 16:40:04 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sat, 11 Jul 2020 16:40:04 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> Message-ID: <1514606299.525.1594478405167@webmail.proxmox.com> > But how does that work with non-VM client? (some non-VM client systems > run LVM, so lvm could be used to create a snapshot and backup that, for > example. Does it do that? Will my non-VM mysql backups be consistent?) Currently not, but there are plans to add that for ZFS (any maybe btrfs). From tonci at suma-informatika.hr Sat Jul 11 23:50:35 2020 From: tonci at suma-informatika.hr (=?UTF-8?B?VG9uxI1pIFN0aXBpxI1ldmnEhw==?=) Date: Sat, 11 Jul 2020 23:50:35 +0200 Subject: [PVE-User] pve-user Digest, Vol 148, Issue 10 In-Reply-To: References: Message-ID: <018e3d2f-cd73-34e0-7a2e-fa38e83b97da@suma-informatika.hr> Yes !!! ... this is what were waiting for, in spite of that proxmox already have pretty good out-of-the-box backup? solution, we missed? inc & diff backups ... After putting very good zfs support (since then every my prox setup has been based on ) this? is (imho) next big big step So big congrats to the team !!! Now? I will shortly describe my usual cluster setups ... and will have one question about? fitting PBS into it - my cluster -? 3 nodes ->? 2 nodes are redundant VM hosts and 3rd node is quorum and BACKUP one ! Thank to? ZFS we actually have quasi inc backup (and the very very fast one -- snapshot repl etc etc )??? This 3rd quorum/bck node has very big zfs 10 raid as backup repository ... But ... due to? zfs I can protect just one Prox host (standalone)? with other prox backup server very efficiently? and the most important benefits in scenarios above are: (imho) 1. very efficient backup (pve-zsync / storage (cluster) sync? etc ...) 2. Every backed-up VM is ready to run !? (on very that backup node ...? after snapshot cloning etc etc ) ? ... So this could be considered as DR solution also So since you are about to develop true and serious backup server (huge respect :) that can actually replace my 3rd backup node , the question would be: ?? Can PBS play quorum role ???? If yes, perfect ... I do not need 4th server , othervise? 4th server (as such) is needed in smallest HA solution ... or ... ? Can PBS "be" host also ? ... than I could install prox-qouroum VM ...? etc Thank you very much in advance BR Tonci /srda?an pozdrav / best regards / Ton?i Stipi?evi?, dipl. ing. elektr. /direktor / manager/** ** d.o.o. ltd. *podr?ka / upravljanje **IT*/?sustavima za male i srednje tvrtke/ /Small & Medium Business /*IT*//*support / management* Badali?eva 27 / 10000 Zagreb / Hrvatska ? Croatia url: www.suma-informatika.hr mob: +385 91 1234003 fax: +385 1? 5560007 On 10. 07. 2020. 
13:56, pve-user-request at lists.proxmox.com wrote: > Send pve-user mailing list submissions to > pve-user at lists.proxmox.com > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > or, via email, send a message with subject or body 'help' to > pve-user-request at lists.proxmox.com > > You can reach the person managing the list at > pve-user-owner at lists.proxmox.com > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of pve-user digest..." > > > Today's Topics: > > 1. Proxmox Backup Server (beta) (Martin Maurer) > 2. Re: Proxmox Backup Server (beta) (Eneko Lacunza) > 3. Re: Proxmox Backup Server (beta) (Dietmar Maurer) > 4. Re: Proxmox Backup Server (beta) (Roland) > 5. Re: Proxmox Backup Server (beta) (Eneko Lacunza) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 10 Jul 2020 12:56:46 +0200 > From: Martin Maurer > To: PVE User List , pve-devel > , pbs-devel at lists.proxmox.com > Subject: [PVE-User] Proxmox Backup Server (beta) > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > We are proud to announce the first beta release of our new Proxmox Backup Server. > > It's an enterprise-class client-server backup software that backups virtual machines, containers, and physical hosts. It is specially optimized for the Proxmox Virtual Environment platform and allows you to backup and replicate your data securely. It provides easy management with a command line and web-based user interface, and is licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). > > Proxmox Backup Server supports incremental backups, deduplication, compression and authenticated encryption. Using Rust https://www.rust-lang.org/ as implementation language guarantees high performance, low resource usage, and a safe, high quality code base. It features strong encryption done on the client side. Thus, it?s possible to backup data to not fully trusted targets. > > Main Features > > Support for Proxmox VE: > The Proxmox Virtual Environment is fully supported and you can easily backup virtual machines (supporting QEMU dirty bitmaps - https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. > > Performance: > The whole software stack is written in Rust https://www.rust-lang.org/, to provide high speed and memory efficiency. > > Deduplication: > Periodic backups produce large amounts of duplicate data. The deduplication layer avoids redundancy and minimizes the used storage space. > > Incremental backups: > Changes between backups are typically low. Reading and sending only the delta reduces storage and network impact of backups. > > Data Integrity: > The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum algorithm assures the accuracy and consistency of your backups. > > Remote Sync: > It is possible to efficiently synchronize data to remote sites. Only deltas containing new data are transferred. > > Compression: > The ultra fast Zstandard compression is able to compress several gigabytes of data per second. > > Encryption: > Backups can be encrypted on the client-side using AES-256 in Galois/Counter Mode (GCM https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated encryption mode provides very high performance on modern hardware. > > Web interface: > Manage Proxmox backups with the integrated web-based user interface. > > Open Source: > No secrets. 
Proxmox Backup Server is free and open-source software. The source code is licensed under AGPL, v3. > > Support: > Enterprise support will be available from Proxmox. > > And of course - Backups can be restored! > > Release notes > https://pbs.proxmox.com/wiki/index.php/Roadmap > > Download > https://www.proxmox.com/downloads > Alternate ISO download: > http://download.proxmox.com/iso > > Documentation > https://pbs.proxmox.com > > Community Forum > https://forum.proxmox.com > > Source Code > https://git.proxmox.com > > Bugtracker > https://bugzilla.proxmox.com > > FAQ > Q: How does this integrate into Proxmox VE? > A: Just add your Proxmox Backup Server storage as new storage backup target to your Proxmox VE. Make sure that you have at least pve-manager 6.2-9 installed. > > Q: What will happen with the existing Proxmox VE backup (vzdump)? > A: You can still use vzdump. The new backup is an additional but very powerful way to backup and restore your VMs and container. > > Q: Can I already backup my other Debian servers (file backup agent)? > A: Yes, just install the Proxmox Backup Client (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). > > Q: Are there already backup agents for other distributions? > A: Not packaged yet, but using a statically linked binary should work in most cases on modern Linux OS (work in progress). > > Q: Is there any recommended server hardware for the Proxmox Backup Server? > A: Use enterprise class server hardware with enough disks for the (big) ZFS pool holding your backup data. The Backup Server should be in the same datacenter as your Proxmox VE hosts. > > Q: Where can I get more information about coming feature updates? > A: Follow the announcement forum, pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and subscribe to our newsletter https://www.proxmox.com/news. > > Please help us reaching the final release date by testing this beta and by providing feedback via https://forum.proxmox.com > From dietmar at proxmox.com Sun Jul 12 06:41:07 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sun, 12 Jul 2020 06:41:07 +0200 (CEST) Subject: [PVE-User] pve-user Digest, Vol 148, Issue 10 In-Reply-To: <018e3d2f-cd73-34e0-7a2e-fa38e83b97da@suma-informatika.hr> References: <018e3d2f-cd73-34e0-7a2e-fa38e83b97da@suma-informatika.hr> Message-ID: <1356413298.533.1594528868357@webmail.proxmox.com> > So since you are about to develop true and serious backup server (huge > respect :) that can actually replace my 3rd backup node , the question > would be: > > ?? Can PBS play quorum role ? Yes, because you can install proxmox backup on a proxmox ve host. From lindsay.mathieson at gmail.com Sun Jul 12 10:53:28 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sun, 12 Jul 2020 18:53:28 +1000 Subject: [PVE-User] BackupServer Feature Request - Compression display Message-ID: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> Is it possible to display the compressed size of the VM in the BackupServer and the email status report from vzdump? I know it is compressed from examining the .chunks usage, but it would be useful to see it in the UI and reports. Thanks. 
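For now the only rough indicator I've found is checking the chunk store on the backup server itself, e.g. (datastore path being whatever was configured there):

# du -sh /path/to/datastore/.chunks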
-- Lindsay From dietmar at proxmox.com Sun Jul 12 18:28:26 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Sun, 12 Jul 2020 18:28:26 +0200 (CEST) Subject: [PVE-User] BackupServer Feature Request - Compression display In-Reply-To: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> References: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> Message-ID: <568120523.549.1594571306775@webmail.proxmox.com> > Is it possible to display the compressed size of the VM in the > BackupServer and the email status report from vzdump? We do incremental backup using dirty bitmaps. Gathering those informations for all reused chunks would slow down the whole backup process, so I am quite unsure if we want to do that ... From lindsay.mathieson at gmail.com Mon Jul 13 03:10:40 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Mon, 13 Jul 2020 11:10:40 +1000 Subject: [PVE-User] BackupServer Feature Request - Compression display In-Reply-To: <568120523.549.1594571306775@webmail.proxmox.com> References: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> <568120523.549.1594571306775@webmail.proxmox.com> Message-ID: <4e38b093-6a1f-1a04-1ebb-dfa6014f3196@gmail.com> On 13/07/2020 2:28 am, Dietmar Maurer wrote: > We do incremental backup using dirty bitmaps. Gathering > those informations for all reused chunks would slow down > the whole backup process, so I am quite unsure if we > want to do that ... Maybe I'm missing your point, but I wasn't meaning the incremental display of read/write during backup (though I had noticed that changed), but just the end result of how much data is stored. There is no easy way to tell how much space a backup is taking on the backup server? -- Lindsay From dietmar at proxmox.com Mon Jul 13 07:59:36 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Mon, 13 Jul 2020 07:59:36 +0200 (CEST) Subject: [PVE-User] BackupServer Feature Request - Compression display In-Reply-To: <4e38b093-6a1f-1a04-1ebb-dfa6014f3196@gmail.com> References: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> <568120523.549.1594571306775@webmail.proxmox.com> <4e38b093-6a1f-1a04-1ebb-dfa6014f3196@gmail.com> Message-ID: <1368834871.556.1594619977103@webmail.proxmox.com> > Maybe I'm missing your point, but I wasn't meaning the incremental > display of read/write during backup (though I had noticed that changed), > but just the end result of how much data is stored. This would mean to touch all chunks at the server to get the compressed size. That is possible, but slow down the whole process. I guess that would remove the "wow, that was fast" effect... Maybe we can gather such stats during garbage collection... From devzero at web.de Mon Jul 13 12:15:23 2020 From: devzero at web.de (Roland) Date: Mon, 13 Jul 2020 12:15:23 +0200 Subject: [PVE-User] linux idle cpu overhead in kvm - old issue, but still there in 2020... Message-ID: <4900ba10-107e-439c-9716-6e988ae7f5ef@web.de> hello, i have found that there is an old bug still around in linux, which is causing quite an amount of unnecessary cpu consumption in kvm/proxmox, and thus, wasting precious power. i run some proxmox installations on older systems and on those, it's quite significant difference. on the slowest system, a single debian 10 VM , kvm process is at 20% cpu (VM is 100% idle) when this issue is present. if i change VMs machine type from i440fx(default) to q35 the problem goes away. the same applies when running "powertop --auto-tune" inside the guest (with i440fx type - enable autosuspend for usb-controller + tablet device). 
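for reference, switching an existing VM over to q35 for a quick test is a one-liner on the pve host (vmid 100 is just an example, and the guest needs a full stop/start before the new machine type is used):

# qm set 100 --machine q35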
on some L5630 machine, in proxmox summary i see "CPU usage" drop from 10% to <1%. see: https://bugzilla.redhat.com/show_bug.cgi?id=478317 https://bugzilla.redhat.com/show_bug.cgi?id=949547 i guess this information could make a difference for people who run a large amount of virtual machines or use older systems/cpu's. on most recent cpu's, i think the difference is not that big. anyway, i really wonder how linux bugs have such great survival capability.... regards roland more references: https://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00149.html https://www.redhat.com/archives/vfio-users/2015-November/msg00159.html From gregor at aeppelbroe.de Mon Jul 13 13:37:18 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Mon, 13 Jul 2020 13:37:18 +0200 Subject: [PVE-User] Proxmox Backup Server - understanding backup integration to PVE, backup single VM Message-ID: <12572310.uLZWGnKmhe@ph-pc014.peiker-holding.de> Hi, No I've already an Testsetup: PVE001 as test source as an bare metal PVE setup ( On this, what mean path in the user rights definition?) *root at pve001:~# pvesm status --storage BUSTORE001* *Name Type Status Total Used Available %* *BUSTORE001 pbs active 163574580 1858948 153336856 1.14%* Could I do it like an normal backup via GUI? In the moment I got: *ERROR: VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found. ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found* From a.lauterer at proxmox.com Mon Jul 13 14:09:05 2020 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 13 Jul 2020 14:09:05 +0200 Subject: [PVE-User] Proxmox Backup Server - understanding backup integration to PVE, backup single VM In-Reply-To: <12572310.uLZWGnKmhe@ph-pc014.peiker-holding.de> References: <12572310.uLZWGnKmhe@ph-pc014.peiker-holding.de> Message-ID: On 7/13/20 1:37 PM, Gregor Burck wrote: > Hi, > > > > No I've already an Testsetup: > > > PVE001 as test source as an bare metal PVE setup > > > ( On this, what mean path in the user rights definition?) > > > *root at pve001:~# pvesm status --storage BUSTORE001* > *Name Type Status Total Used Available %* > *BUSTORE001 pbs active 163574580 1858948 153336856 1.14%* > > > Could I do it like an normal backup via GUI? In the moment I got: > > *ERROR: VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found. > ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found* Do you use the enterprise repository? You need at least the following versions or newer installed: pve-manager: 6.2-9 pve-qemu-kvm: 5.0.0-9 qemu-server: 6.2-8 These are not yet available in the enterprise repository but should be very soon AFAIK. 
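To quickly check what is actually installed on a node, something like this does the trick:

# pveversion -v | grep -E 'pve-manager|pve-qemu-kvm|qemu-server'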
From lindsay.mathieson at gmail.com Tue Jul 14 01:21:44 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Tue, 14 Jul 2020 09:21:44 +1000 Subject: [PVE-User] BackupServer Feature Request - Compression display In-Reply-To: <1368834871.556.1594619977103@webmail.proxmox.com> References: <78148244-01b5-0921-7d78-c4517c231e47@gmail.com> <568120523.549.1594571306775@webmail.proxmox.com> <4e38b093-6a1f-1a04-1ebb-dfa6014f3196@gmail.com> <1368834871.556.1594619977103@webmail.proxmox.com> Message-ID: <773b9d10-faba-9b13-f1bc-9fcc7b6ac6cf@gmail.com> On 13/07/2020 3:59 pm, Dietmar Maurer wrote: > This would mean to touch all chunks at the server to get the compressed > size. That is possible, but slow down the whole process. Ok, I was presuming you could just track the size as you received the backup. -- Lindsay From gregor at aeppelbroe.de Tue Jul 14 11:38:41 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Tue, 14 Jul 2020 11:38:41 +0200 Subject: [PVE-User] Proxmox Backup Server - understanding backup integration to PVE, backup single VM In-Reply-To: References: <12572310.uLZWGnKmhe@ph-pc014.peiker-holding.de> Message-ID: <4552217.GXAFRqVoOG@ph-pc014.peiker-holding.de> Hi, > Do you use the enterprise repository? No, for testing the community repository > > You need at least the following versions or newer installed: > pve-manager: 6.2-9 > pve-qemu-kvm: 5.0.0-9 > qemu-server: 6.2-8 Think I've made an dist-upgrade before testing, but after an dist-upgrade just now, I got the 6.2-9 packages and it work. Thank you Gregor From aderumier at odiso.com Tue Jul 14 16:30:45 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Tue, 14 Jul 2020 16:30:45 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1514606299.525.1594478405167@webmail.proxmox.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> Message-ID: <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> Hi, I don't have tested it yet and read the full docs, but it is possible to do ceph to ceph backup with ceph snapshots (instead qemu bitmap tracking)? Currently in production, we are backuping like that, with incremental snapshot, we keep X snapshots on ceph backup storage by vm, and production ceph cluster only keep the last snasphot. The main advantage, is that we are only doing a full backup once, then incremental backups forever. (and we have checkum verifications,encryption,...) on ceph backup We can restore full block volume, but also selected files with mounting the volume with nbd. ----- Mail original ----- De: "dietmar" ?: "Proxmox VE user list" , "mj" , "Thomas Lamprecht" Envoy?: Samedi 11 Juillet 2020 16:40:04 Objet: Re: [PVE-User] Proxmox Backup Server (beta) > But how does that work with non-VM client? (some non-VM client systems > run LVM, so lvm could be used to create a snapshot and backup that, for > example. Does it do that? Will my non-VM mysql backups be consistent?) Currently not, but there are plans to add that for ZFS (any maybe btrfs). 
_______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From t.lamprecht at proxmox.com Tue Jul 14 17:52:41 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Tue, 14 Jul 2020 17:52:41 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> Message-ID: <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> Hi, On 14.07.20 16:30, Alexandre DERUMIER wrote: > I don't have tested it yet and read the full docs, The following gives a quick overview: https://pbs.proxmox.com/docs/introduction.html#main-features > > but it is possible to do ceph to ceph backup with ceph snapshots (instead qemu bitmap tracking)? No. ceph, or other storage snapshots, are not used for backup in PBS. > > Currently in production, we are backuping like that, with incremental snapshot, > > we keep X snapshots on ceph backup storage by vm, and production ceph cluster only keep the last snasphot. > > The main advantage, is that we are only doing a full backup once, then incremental backups forever. > (and we have checkum verifications,encryption,...) on ceph backup Proxmox Backup Server effectively does that too, but independent from the source storage. We always get the last backup index and only upload the chunks which changed. For running VMs dirty-bitmap is on to improve this (avoids reading of unchanged blocks) but it's only an optimization - the backup is incremental either way. > We can restore full block volume, but also selected files with mounting the volume with nbd. There's a block driver for Proxmox Backup Server, so that should work just the same way. From atilav at lightspeed.ca Tue Jul 14 18:05:43 2020 From: atilav at lightspeed.ca (Atila Vasconcelos) Date: Tue, 14 Jul 2020 09:05:43 -0700 Subject: [PVE-User] linux idle cpu overhead in kvm - old issue, but still there in 2020... In-Reply-To: <4900ba10-107e-439c-9716-6e988ae7f5ef@web.de> References: <4900ba10-107e-439c-9716-6e988ae7f5ef@web.de> Message-ID: Wow, I just tried this at my servers (very old Dell PowerEdge 2950); The results are impressive! 8o ABV On 2020-07-13 3:15 a.m., Roland wrote: > hello, > > i have found that there is an old bug still around in linux, which is > causing quite an amount of unnecessary cpu consumption in kvm/proxmox, > and thus, wasting precious power. > > i run some proxmox installations on older systems and on those, it's > quite significant difference. > > on the slowest system, a single debian 10 VM , kvm process is at 20% cpu > (VM is 100% idle) when this issue is present. > > if i change VMs machine type from i440fx(default) to q35 the problem > goes away. > > the same applies when running "powertop --auto-tune" inside the guest > (with i440fx type - enable autosuspend for usb-controller + tablet > device). > > on some L5630 machine, in proxmox summary i see "CPU usage" drop from > 10% to <1%. > > see: > https://bugzilla.redhat.com/show_bug.cgi?id=478317 > https://bugzilla.redhat.com/show_bug.cgi?id=949547 > > i guess this information could make a difference for people who run a > large amount of virtual machines or use older systems/cpu's. 
> > on most recent cpu's, i think the difference is not that big. > > anyway, i really wonder how linux bugs have such great survival > capability.... > > regards > roland > > more references: > https://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00149.html > https://www.redhat.com/archives/vfio-users/2015-November/msg00159.html > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From devzero at web.de Tue Jul 14 19:16:40 2020 From: devzero at web.de (Roland) Date: Tue, 14 Jul 2020 19:16:40 +0200 Subject: [PVE-User] linux idle cpu overhead in kvm - old issue, but still there in 2020... In-Reply-To: References: <4900ba10-107e-439c-9716-6e988ae7f5ef@web.de> Message-ID: <84b3b8ca-80f8-94d0-27c7-899743ac08e1@web.de> nice :) what guest OS do you use and which showed the problem ? as i think that q35 will not be default in kvm or proxmox anytime soon, shouldn't we perhaps file a bug report for every distro affected ? is there someone who likes to work with this (help testing, writing/tracking bug reports...) ? could perhaps save some tons of CO2 .... roland Am 14.07.20 um 18:05 schrieb Atila Vasconcelos: > Wow, I just tried this at my servers (very old Dell PowerEdge 2950); > > The results are impressive! > > 8o > > > ABV > > > On 2020-07-13 3:15 a.m., Roland wrote: >> hello, >> >> i have found that there is an old bug still around in linux, which is >> causing quite an amount of unnecessary cpu consumption in kvm/proxmox, >> and thus, wasting precious power. >> >> i run some proxmox installations on older systems and on those, it's >> quite significant difference. >> >> on the slowest system, a single debian 10 VM , kvm process is at 20% cpu >> (VM is 100% idle) when this issue is present. >> >> if i change VMs machine type from i440fx(default) to q35 the problem >> goes away. >> >> the same applies when running "powertop --auto-tune" inside the guest >> (with i440fx type - enable autosuspend for usb-controller + tablet >> device). >> >> on some L5630 machine, in proxmox summary i see "CPU usage" drop from >> 10% to <1%. >> >> see: >> https://bugzilla.redhat.com/show_bug.cgi?id=478317 >> https://bugzilla.redhat.com/show_bug.cgi?id=949547 >> >> i guess this information could make a difference for people who run a >> large amount of virtual machines or use older systems/cpu's. >> >> on most recent cpu's, i think the difference is not that big. >> >> anyway, i really wonder how linux bugs have such great survival >> capability.... 
>> >> regards >> roland >> >> more references: >> https://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00149.html >> https://www.redhat.com/archives/vfio-users/2015-November/msg00159.html >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at lists.proxmox.com >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Tue Jul 14 23:17:16 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Tue, 14 Jul 2020 23:17:16 +0200 (CEST) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> References: <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> Message-ID: <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> >>Proxmox Backup Server effectively does that too, but independent from the >>source storage. We always get the last backup index and only upload the chunks >>which changed. For running VMs dirty-bitmap is on to improve this (avoids >>reading of unchanged blocks) but it's only an optimization - the backup is >>incremental either way. What happen if a vm or host crash ? (I think on clean shutdown, the dirty-bitmap is saved, but on failure ?) does it need to re-read all blocks to find diff ? or make a new full backup ? Is it possible to read files inside a vm backup, without restoring it first ? (Don't have check vma format recently, but I think it was not possible because of out of orders blocks) I really think it could be great to add some storage snapshot feature in the future. For ceph, the backup speed is really faster because it's done a bigger block than 64K. (I think it's 4MB object). and also, I really need a lot of space for my backups, and I can't fill them in a single local storage. (don't want to play with multiple datastores) Bonus, it could also be used for disaster recovery management :) But that seem really great for now, I known a lot of people that will be happy with PBS :) Congrats to all proxmox team ! ----- Mail original ----- De: "Thomas Lamprecht" ?: "aderumier" , "Proxmox VE user list" Envoy?: Mardi 14 Juillet 2020 17:52:41 Objet: Re: [PVE-User] Proxmox Backup Server (beta) Hi, On 14.07.20 16:30, Alexandre DERUMIER wrote: > I don't have tested it yet and read the full docs, The following gives a quick overview: https://pbs.proxmox.com/docs/introduction.html#main-features > > but it is possible to do ceph to ceph backup with ceph snapshots (instead qemu bitmap tracking)? No. ceph, or other storage snapshots, are not used for backup in PBS. > > Currently in production, we are backuping like that, with incremental snapshot, > > we keep X snapshots on ceph backup storage by vm, and production ceph cluster only keep the last snasphot. > > The main advantage, is that we are only doing a full backup once, then incremental backups forever. > (and we have checkum verifications,encryption,...) on ceph backup Proxmox Backup Server effectively does that too, but independent from the source storage. We always get the last backup index and only upload the chunks which changed. 
For running VMs dirty-bitmap is on to improve this (avoids reading of unchanged blocks) but it's only an optimization - the backup is incremental either way. > We can restore full block volume, but also selected files with mounting the volume with nbd. There's a block driver for Proxmox Backup Server, so that should work just the same way. From mark at openvs.co.uk Tue Jul 14 23:22:18 2020 From: mark at openvs.co.uk (Mark Adams) Date: Tue, 14 Jul 2020 22:22:18 +0100 Subject: New list for PBS? Message-ID: Hi All, First of all awesome release of PBS proxmox folks - This is something that is really needed for a lot of people. My simple question for this email is, Are you going to create a new mailing list for this? It seems to me that you should, as it is a separate "product" that should have it's own focus. I for one, would prefer to focus on pve on this list. Well done again, and thanks for all the work you do! Regards, Mark From t.lamprecht at proxmox.com Wed Jul 15 06:52:51 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Wed, 15 Jul 2020 06:52:51 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> References: <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> Message-ID: On 14.07.20 23:17, Alexandre DERUMIER wrote: >>> Proxmox Backup Server effectively does that too, but independent from the >>> source storage. We always get the last backup index and only upload the chunks >>> which changed. For running VMs dirty-bitmap is on to improve this (avoids >>> reading of unchanged blocks) but it's only an optimization - the backup is >>> incremental either way. > > What happen if a vm or host crash ? (I think on clean shutdown, the dirty-bitmap is saved, but on failure ?) > does it need to re-read all blocks to find diff ? or make a new full backup ? There's never a new "full backup" as long as the PBS has at least one. But yes, it needs to re-read everything to get the diff for the first backup after the VM process starts, from then the tracking is active again. > > Is it possible to read files inside a vm backup, without restoring it first ? > (Don't have check vma format recently, but I think it was not possible because of out of orders blocks) There's support for block and file level backup, CTs are using a file level backup, you can then even browse the backup on the server (if it's not encrypted) As said, there's a block backend driver for it in QEMU, Stefan made it with Dietmar's libproxmox-backup-qemu0 library. So you should be able to get a backup as block device over NBD and mount it, I guess. (did not tried that yet fully myself). > > I really think it could be great to add some storage snapshot feature in the future. The storage would need to allow us diffing from the outside between the previous snapshot and the current state though, not sure where that's possible in such away that it could be integrated into PBS in a reasonable way. The ceph RBD diff format wouldn't seem to bad, though: https://docs.ceph.com/docs/master/dev/rbd-diff/ > For ceph, the backup speed is really faster because it's done a bigger block than 64K. (I think it's 4MB object). We use 4MiB chunks for block-level backup by default too, for file-level they're dynamic and scale between 64KiB and 4MiB. 
> and also, I really need a lot of space for my backups, and I can't fill them in a single local storage. (don't want to play with multiple datastores) What are your (rough) space requirements? You could always attach a big CephFS or RBD device with local FS as a storage too. Theoretically PBS could live on your separate "backup only" ceph cluster node, or be directly attached to it over 25 to 100G. > Bonus, it could also be used for disaster recovery management :) Something like that would be nice, what's in your mind for your use case? From t.lamprecht at proxmox.com Wed Jul 15 07:30:55 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Wed, 15 Jul 2020 07:30:55 +0200 Subject: [PVE-User] New list for PBS? In-Reply-To: References: Message-ID: <99214aeb-3104-7e0b-224a-8c157fdbb057@proxmox.com> Hi, On 14.07.20 23:22, Mark Adams wrote: > My simple question for this email is, Are you going to create a new mailing > list for this? It seems to me that you should, as it is a separate > "product" that should have it's own focus. https://lists.proxmox.com/cgi-bin/mailman/listinfo There's already pbs-devel for Development discussion, a user list is not yet existent. > I for one, would prefer to focus on pve on this list. As, especially initially, the prime use case for Proxmox Backup Server will be in combination with Proxmox VE, it may often be a fine line to where a mail would belong, some would maybe even address both lists. The beta announcement on this list gathered naturally quite a few inquiries and discussions not always related to Proxmox VE directly, but that was to be expected. I think for the initial beta we'll observe how many PBS-only threads will be made and use that to decide if it's own user list is warranted. cheers, Thomas From mark at openvs.co.uk Wed Jul 15 12:22:03 2020 From: mark at openvs.co.uk (Mark Adams) Date: Wed, 15 Jul 2020 11:22:03 +0100 Subject: New list for PBS? In-Reply-To: <99214aeb-3104-7e0b-224a-8c157fdbb057@proxmox.com> References: <99214aeb-3104-7e0b-224a-8c157fdbb057@proxmox.com> Message-ID: Sounds like a good idea. Thanks for your response Thomas. Cheers, Mark On Wed, 15 Jul 2020 at 06:31, Thomas Lamprecht wrote: > Hi, > > On 14.07.20 23:22, Mark Adams wrote: > > My simple question for this email is, Are you going to create a new > mailing > > list for this? It seems to me that you should, as it is a separate > > "product" that should have it's own focus. > > https://lists.proxmox.com/cgi-bin/mailman/listinfo > > There's already pbs-devel for Development discussion, a user list is not > yet > existent. > > > I for one, would prefer to focus on pve on this list. > > As, especially initially, the prime use case for Proxmox Backup Server will > be in combination with Proxmox VE, it may often be a fine line to where a > mail > would belong, some would maybe even address both lists. > > The beta announcement on this list gathered naturally quite a few inquiries > and discussions not always related to Proxmox VE directly, but that was to > be > expected. > > I think for the initial beta we'll observe how many PBS-only threads will > be > made and use that to decide if it's own user list is warranted. 
> > cheers, > Thomas > From christian.kraus at ckc-it.at Wed Jul 15 12:24:46 2020 From: christian.kraus at ckc-it.at (Christian Kraus) Date: Wed, 15 Jul 2020 10:24:46 +0000 Subject: [PVE-User] Proxmox Backup Server (beta) Message-ID: thanks for this great tool so far it's working probably in my testcenter what i have like to improve/change is the sync job creation wizzard/menu description for local and remote store where i would prever to have added source and destination on the line - if you sync in wrong direction your backubs are gone in 2 sec by false sync direction rg Christian Christian Kraus Inhaber CKC IT Consulting & Solutions e.U. Kirschenallee 22 2120 OBERSDORF ?sterreich Telefon: +43 (0) 680 2062952 Fax: +43 820 220262992 E-mail: christian.kraus at ckc-it.at -----Urspr?ngliche Nachricht----- Von: Martin Maurer? Gesendet: Freitag 10th Juli 2020 12:57 An: PVE User List ; pve-devel ; pbs-devel at lists.proxmox.com Betreff: [PVE-User] Proxmox Backup Server (beta) We are proud to announce the first beta release of our new Proxmox Backup Server. It's an enterprise-class client-server backup software that backups virtual machines, containers, and physical hosts. It is specially optimized for the Proxmox Virtual Environment platform and allows you to backup and replicate your data securely. It provides easy management with a command line and web-based user interface, and is licensed under the GNU Affero General Public License v3 (GNU AGPL, v3). Proxmox Backup Server supports incremental backups, deduplication, compression and authenticated encryption. Using Rust https://www.rust-lang.org/ as implementation language guarantees high performance, low resource usage, and a safe, high quality code base. It features strong encryption done on the client side. Thus, it?s possible to backup data to not fully trusted targets. Main Features Support for Proxmox VE: The Proxmox Virtual Environment is fully supported and you can easily backup virtual machines (supporting QEMU dirty bitmaps - https://www.qemu.org/docs/master/interop/bitmaps.html) and containers. Performance: The whole software stack is written in Rust https://www.rust-lang.org/, to provide high speed and memory efficiency. Deduplication: Periodic backups produce large amounts of duplicate data. The deduplication layer avoids redundancy and minimizes the used storage space. Incremental backups: Changes between backups are typically low. Reading and sending only the delta reduces storage and network impact of backups. Data Integrity: The built in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum algorithm assures the accuracy and consistency of your backups. Remote Sync: It is possible to efficiently synchronize data to remote sites. Only deltas containing new data are transferred. Compression: The ultra fast Zstandard compression is able to compress several gigabytes of data per second. Encryption: Backups can be encrypted on the client-side using AES-256 in Galois/Counter Mode (GCM https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated encryption mode provides very high performance on modern hardware. Web interface: Manage Proxmox backups with the integrated web-based user interface. Open Source: No secrets. Proxmox Backup Server is free and open-source software. The source code is licensed under AGPL, v3. Support: Enterprise support will be available from Proxmox. And of course - Backups can be restored! 
Release notes https://pbs.proxmox.com/wiki/index.php/Roadmap Download https://www.proxmox.com/downloads Alternate ISO download: http://download.proxmox.com/iso Documentation https://pbs.proxmox.com Community Forum https://forum.proxmox.com Source Code https://git.proxmox.com Bugtracker https://bugzilla.proxmox.com FAQ Q: How does this integrate into Proxmox VE? A: Just add your Proxmox Backup Server storage as new storage backup target to your Proxmox VE. Make sure that you have at least pve-manager 6.2-9 installed. Q: What will happen with the existing Proxmox VE backup (vzdump)? A: You can still use vzdump. The new backup is an additional but very powerful way to backup and restore your VMs and container. Q: Can I already backup my other Debian servers (file backup agent)? A: Yes, just install the Proxmox Backup Client (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian). Q: Are there already backup agents for other distributions? A: Not packaged yet, but using a statically linked binary should work in most cases on modern Linux OS (work in progress). Q: Is there any recommended server hardware for the Proxmox Backup Server? A: Use enterprise class server hardware with enough disks for the (big) ZFS pool holding your backup data. The Backup Server should be in the same datacenter as your Proxmox VE hosts. Q: Where can I get more information about coming feature updates? A: Follow the announcement forum, pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and subscribe to our newsletter https://www.proxmox.com/news. Please help us reaching the final release date by testing this beta and by providing feedback via https://forum.proxmox.com -- Best Regards, Martin Maurer martin at proxmox.com https://www.proxmox.com _______________________________________________ pve-user mailing list pve-user at lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user From danielb at numberall.com Wed Jul 15 23:36:56 2020 From: danielb at numberall.com (Daniel Bayerdorffer) Date: Wed, 15 Jul 2020 17:36:56 -0400 (EDT) Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> Message-ID: <176392164.4390.1594849016963.JavaMail.zimbra@numberall.com> >> >> Is it possible to read files inside a vm backup, without restoring it first ? >> (Don't have check vma format recently, but I think it was not possible because of out of orders blocks) > >There's support for block and file level backup, CTs are using a file level >backup, you can then even browse the backup on the server (if it's not encrypted) > >As said, there's a block backend driver for it in QEMU, Stefan made it with >Dietmar's libproxmox-backup-qemu0 library. So you should be able to get a backup >as block device over NBD and mount it, I guess. (did not tried that yet fully >myself). I'm still wrapping my head around some of the concepts here. So sorry for the simple questions. The above is not quite clear. Can we do file by file restore from the backups and/or archives? If so, will that work on a Windows VM backup? I.E. Can I restore a file in a Windows VM? 
Thanks, Daniel From t.lamprecht at proxmox.com Thu Jul 16 09:33:47 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Thu, 16 Jul 2020 09:33:47 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <176392164.4390.1594849016963.JavaMail.zimbra@numberall.com> References: <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> <176392164.4390.1594849016963.JavaMail.zimbra@numberall.com> Message-ID: On 15.07.20 23:36, Daniel Bayerdorffer wrote: >>> >>> Is it possible to read files inside a vm backup, without restoring it first ? >>> (Don't have check vma format recently, but I think it was not possible because of out of orders blocks) >> >> There's support for block and file level backup, CTs are using a file level >> backup, you can then even browse the backup on the server (if it's not encrypted) >> >> As said, there's a block backend driver for it in QEMU, Stefan made it with >> Dietmar's libproxmox-backup-qemu0 library. So you should be able to get a backup >> as block device over NBD and mount it, I guess. (did not tried that yet fully >> myself). > > > I'm still wrapping my head around some of the concepts here. So sorry for the simple questions. > > The above is not quite clear. Can we do file by file restore from the backups and/or archives? The important thing to understand is that the Proxmox Backup Server can do two different types of backup: 1) File-level backup, used for container and host backups 2) Block-based backup, used for VMs and optional any raw block device backup You can restore on a file level for file-based directly. You cannot do so yet for block-level. But, a you can get the block device state from any backup, and boot a VM with that attached (as readonly) from there you then have file access - while the basics are here, the easy integration is still missing. > If so, will that work on a Windows VM backup? I.E. Can I restore a file in a Windows VM? No, not yet, but windows file-level based support is also planned. Then you could backup from inside the VM and have file level restore or do it from outside and have the restore full backup or use a VM to do file-level restore. Hope that helps. cheers, Thomas From w.bumiller at proxmox.com Thu Jul 16 12:17:27 2020 From: w.bumiller at proxmox.com (Wolfgang Bumiller) Date: Thu, 16 Jul 2020 12:17:27 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: References: <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> Message-ID: <20200716101727.n3fueyopc3cvxgtt@olga.proxmox.com> On Wed, Jul 15, 2020 at 05:36:56PM -0400, Daniel Bayerdorffer via pve-user wrote: > Date: Wed, 15 Jul 2020 17:36:56 -0400 (EDT) > From: Daniel Bayerdorffer > To: Proxmox VE user list > Subject: Re: [PVE-User] Proxmox Backup Server (beta) > X-Mailer: Zimbra 8.8.15_GA_3955 (ZimbraWebClient - FF78 > (Win)/8.8.15_GA_3953) > > >> > >> Is it possible to read files inside a vm backup, without restoring it first ? 
> >> (Don't have check vma format recently, but I think it was not possible because of out of orders blocks) > > > >There's support for block and file level backup, CTs are using a file level > >backup, you can then even browse the backup on the server (if it's not encrypted) > > > >As said, there's a block backend driver for it in QEMU, Stefan made it with > >Dietmar's libproxmox-backup-qemu0 library. So you should be able to get a backup > >as block device over NBD and mount it, I guess. (did not tried that yet fully > >myself). > > > I'm still wrapping my head around some of the concepts here. So sorry for the simple questions. > > The above is not quite clear. Can we do file by file restore from the backups and/or archives? > If so, will that work on a Windows VM backup? I.E. Can I restore a file in a Windows VM? You have a) file-based backups - containers on PVE - 'host'-type backups (which are really just "arbitrary file backups") (by running $ proxmox-backup-client backup content.pxar:/directory/to/backup \ --backup-type host \ --backup-id my-important-data \ --repository user at pbs@host:datastore ) - 'host' backups made manually from *within* a running VM (contrary to popular belief, guest machines can make their own backups, too :-P) This stores a file archive on the server which can be extracted directly, mounted via fuse, or, on the GUI you can open the file browser in your web browser and download individual files. b) block-based backups - VMs on PVE This stores whole disks, and we include a way to "attach" a disk from a backup to a VM (may even be hotplugged). Just imagine this like using a .raw disk image from any other remote storage such as NFS. If your guest operating system can read it, you can do whatever from inside the guest. There's no *direct* support for extracting files from a block device. There *may* at some point be some way to do this, but we cannot make any promises there, and it may well just be some predefined VM template auto-logging into an xfce or gnome session with a file browser open ready to mount any attached disk ;-) From elacunza at binovo.es Thu Jul 16 13:09:00 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 16 Jul 2020 13:09:00 +0200 Subject: PBS beta - encrypted PVE integrated backups Message-ID: Hi, I'm playing a bit with PBS Beta, found some quirks (reported to bugzilla) but basic setup with PVE integration has been quite pleasant. I'm trying to setup encryption, but can't figure exactly how. Tried setting "encryption-key autogen" in storage.cfg of PVE, and creating key pair and importing the pub key with "proxmox-backup-client" in PVE node, but doesn't seem to encrypt the backups. Is there any doc or example for this? Thanks a lot Eneko -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. 
Oficina 10-11, 20180 Oiartzun From elacunza at binovo.es Thu Jul 16 13:57:59 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 16 Jul 2020 13:57:59 +0200 Subject: PBS beta - encrypted PVE integrated backups In-Reply-To: References: Message-ID: <4da35950-a20f-f42d-db14-afc1df926183@binovo.es> Hi, Found the answer in the forum: https://forum.proxmox.com/threads/how-to-do-encrypted-backups.72868/ As Wolfgang said: the file has to be manually created via `proxmox-backup-client key create --kdf=none /etc/pve/priv/storage/STORAGENAME.enc` Thanks Eneko El 16/7/20 a las 13:09, Eneko Lacunza escribi?: > Hi, > > I'm playing a bit with PBS Beta, found some quirks (reported to > bugzilla) but basic setup with PVE integration has been quite pleasant. > > I'm trying to setup encryption, but can't figure exactly how. Tried > setting "encryption-key autogen" in storage.cfg of PVE, and creating > key pair and importing the pub key with "proxmox-backup-client" in PVE > node, but doesn't seem to encrypt the backups. > > Is there any doc or example for this? > > Thanks a lot > Eneko > -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From pve at junkyard.4t2.com Thu Jul 16 15:03:49 2020 From: pve at junkyard.4t2.com (Tom Weber) Date: Thu, 16 Jul 2020 15:03:49 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> Message-ID: <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> Am Dienstag, den 14.07.2020, 17:52 +0200 schrieb Thomas Lamprecht: > > Proxmox Backup Server effectively does that too, but independent from > the > source storage. We always get the last backup index and only upload > the chunks > which changed. For running VMs dirty-bitmap is on to improve this > (avoids > reading of unchanged blocks) but it's only an optimization - the > backup is > incremental either way. So there is exactly one dirty-bitmap that get's nulled after a backup? I'm asking because I have Backup setups with 2 Backup Servers at different Locations, backing up (file-level, incremental) on odd days to server1 on even days to server2. Such a setup wouldn't work with the block level incremental backup and the dirty-bitmap for pve vms + pbs, right? Regards, Tom From mark at tuxis.nl Thu Jul 16 16:36:11 2020 From: mark at tuxis.nl (Mark Schouten) Date: Thu, 16 Jul 2020 16:36:11 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <20200716101727.n3fueyopc3cvxgtt@olga.proxmox.com> References: <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> <20200716101727.n3fueyopc3cvxgtt@olga.proxmox.com> Message-ID: <20200716143611.hedfflblkfwmr4o7@shell.tuxis.net> On Thu, Jul 16, 2020 at 12:17:27PM +0200, Wolfgang Bumiller wrote: > b) block-based backups > - VMs on PVE > > This stores whole disks, and we include a way to "attach" a disk from > a backup to a VM (may even be hotplugged). 
Just imagine this like > using a .raw disk image from any other remote storage such as NFS. If > your guest operating system can read it, you can do whatever from > inside the guest. When you say 'we include a way', you mean you are going to include it? This isn't yet available, right? > There's no *direct* support for extracting files from a block device. > There *may* at some point be some way to do this, but we cannot make > any promises there, and it may well just be some predefined VM template > auto-logging into an xfce or gnome session with a file browser open > ready to mount any attached disk ;-) I know UrBackup allows you to do this, even with Windows and networking enabled. -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From t.lamprecht at proxmox.com Thu Jul 16 19:04:40 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Thu, 16 Jul 2020 19:04:40 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <20200716143611.hedfflblkfwmr4o7@shell.tuxis.net> References: <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <1788232040.275836.1594761436288.JavaMail.zimbra@odiso.com> <20200716101727.n3fueyopc3cvxgtt@olga.proxmox.com> <20200716143611.hedfflblkfwmr4o7@shell.tuxis.net> Message-ID: <6a1adf6a-6bfa-10c3-c484-375bd5a9fde6@proxmox.com> On 16.07.20 16:36, Mark Schouten wrote: > On Thu, Jul 16, 2020 at 12:17:27PM +0200, Wolfgang Bumiller wrote: >> b) block-based backups >> - VMs on PVE >> >> This stores whole disks, and we include a way to "attach" a disk from >> a backup to a VM (may even be hotplugged). Just imagine this like >> using a .raw disk image from any other remote storage such as NFS. If >> your guest operating system can read it, you can do whatever from >> inside the guest. > > When you say 'we include a way', you mean you are going to include it? > This isn't yet available, right? Sure is ;-) Disclaimer: not to deeply tested yet though. https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=debian/patches/pve/0044-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch;h=855cc2207aea67fab330f2a205ddb92322bd4fb0;hb=3499c5b45a30a3489ca0d688d2d391fdfe899861#l116 Short procedure example to use it could look somewhat like: # modprobe nbd # export PBS_PASSWORD='12345supersecure!!!' # export PBS_FINGERPRINT=00:b4:b5:50... 
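(side note: if the fingerprint is not at hand, it should be printable on the PBS host itself - IIRC with

# proxmox-backup-manager cert info

which lists it along with the certificate details; writing this from memory, so double check against the docs)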
(above could be also passed directly as option, but well) # proxmox-backup-client list --repository root at pam@192.168.1.10:datastore # qemu-nbd --connect=/dev/nbd0 -f raw -r pbs:repository=root at pam@192.168.1.10:datastore,snapshot=vm/108/2020-07-14T09:03:47Z,archive=drive-scsi0.img.fidx # lsblk /dev/nbd0 # mkdir /mnt/test # mount /dev/nbd0 /mnt/test From f.gruenbichler at proxmox.com Fri Jul 17 09:31:42 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Fri, 17 Jul 2020 09:31:42 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> Message-ID: <1594969118.mb1771vpca.astroid@nora.none> On July 16, 2020 3:03 pm, Tom Weber wrote: > Am Dienstag, den 14.07.2020, 17:52 +0200 schrieb Thomas Lamprecht: >> >> Proxmox Backup Server effectively does that too, but independent from >> the >> source storage. We always get the last backup index and only upload >> the chunks >> which changed. For running VMs dirty-bitmap is on to improve this >> (avoids >> reading of unchanged blocks) but it's only an optimization - the >> backup is >> incremental either way. > > So there is exactly one dirty-bitmap that get's nulled after a backup? > > I'm asking because I have Backup setups with 2 Backup Servers at > different Locations, backing up (file-level, incremental) on odd days > to server1 on even days to server2. > > Such a setup wouldn't work with the block level incremental backup and > the dirty-bitmap for pve vms + pbs, right? > > Regards, > Tom right now, this would not work since for each backup, the bitmap would be invalidated since the last backup returned by the server does not match the locally stored value. theoretically we could track multiple backup storages, but bitmaps are not free and the handling would quickly become unwieldy. probably you are better off backing up to one server and syncing that to your second one - you can define both as storage on the PVE side and switch over the backup job targets if the primary one fails. theoretically[1] 1.) backup to A 2.) sync A->B 3.) backup to B 4.) sync B->A 5.) repeat works as well and keeps the bitmap valid, but you carefully need to lock-step backup and sync jobs, so it's probably less robust than: 1.) backup to A 2.) sync A->B where missing a sync is not ideal, but does not invalidate the bitmap. note that your backup will still be incremental in any case w.r.t. client <-> server traffic, the client just has to re-read all disks to decide whether it has to upload those chunks or not if the bitmap is not valid or does not exist. 1: theoretically, as you probably run into https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your backups as 'backup at pam', which is not recommended ;) From gregor at aeppelbroe.de Fri Jul 17 09:36:24 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Fri, 17 Jul 2020 09:36:24 +0200 Subject: [PVE-User] LVM Rescue Message-ID: <20200717093624.EGroupware.82nNX7q9rxnLpZ8ESiI8tX6@heim.aeppelbroe.de> Hi, first: is is alowed to post an question here and the forum? 
Someone will be on both, other only here or there? second: I've trouble with an not started pve server, he've problems with his lvm storage. From a live System I could see the disks. My question now is, could I copy them with dd to an external medium? Like: 'dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/image.img bs=...' The problem occure when I setup an NFS Backup Storage and made an reboot,... so there is now no vaild backup. Thank for all little help,... Gregor From ralf.storm at konzept-is.de Fri Jul 17 09:45:22 2020 From: ralf.storm at konzept-is.de (Ralf Storm) Date: Fri, 17 Jul 2020 09:45:22 +0200 Subject: [PVE-User] LVM Rescue In-Reply-To: <20200717093624.EGroupware.82nNX7q9rxnLpZ8ESiI8tX6@heim.aeppelbroe.de> References: <20200717093624.EGroupware.82nNX7q9rxnLpZ8ESiI8tX6@heim.aeppelbroe.de> Message-ID: <5107d71f-8791-1cb8-1cc5-dd15f503fb13@konzept-is.de> Hello Gregor, yes, feel free to use dd to an external device Mit freundlichen Gr??en / With best regards Am 17.07.2020 um 09:36 schrieb Gregor Burck: > Hi, > > first: is is alowed to post an question here and the forum? Someone > will be on both, other only here or there? > > second: > > I've trouble with an not started pve server, he've problems with his > lvm storage. > From a live System I could see the disks. > > My question now is, could I copy them with dd to an external medium? > Like: 'dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/image.img bs=...' > > The problem occure when I setup an NFS Backup Storage and made an > reboot,... so there is now no vaild backup. > > Thank for all little help,... > > Gregor > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From s.ivanov at proxmox.com Fri Jul 17 10:00:07 2020 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Fri, 17 Jul 2020 10:00:07 +0200 Subject: [PVE-User] LVM Rescue In-Reply-To: <20200717093624.EGroupware.82nNX7q9rxnLpZ8ESiI8tX6@heim.aeppelbroe.de> References: <20200717093624.EGroupware.82nNX7q9rxnLpZ8ESiI8tX6@heim.aeppelbroe.de> Message-ID: <20200717100007.3bd8d43a@rosa.proxmox.com> Hi, On Fri, 17 Jul 2020 09:36:24 +0200 Gregor Burck wrote: > Hi, > > first: is is alowed to post an question here and the forum? Someone > will be on both, other only here or there? allowed - yes (both are public forums) encouraged - no - all of us here at proxmox read both and try to help out - if you post to both that leads to double effort. Of course if you don't get an answer on either of both - you can/should try to reach out via the other medium (best with referencing the forum/mailinglist thread :) > > second: > > I've trouble with an not started pve server, he've problems with his > lvm storage. > From a live System I could see the disks. > > My question now is, could I copy them with dd to an external medium? > Like: 'dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/image.img bs=...' The command looks ok - and if the disk contents are ok you should be able to use the resulting image.img as raw image for the guest (name it appropriately and put it in the right path in a configured storage) OTOH - if the contents of the LV /dev/pve/vm-102-disk-0 have a problem then copying it won't help - but it's probably the first thing I'd try in that situation. Hope that helps! stoiko > > The problem occure when I setup an NFS Backup Storage and made an > reboot,... so there is now no vaild backup. > > Thank for all little help,... 
> > Gregor > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From pve at junkyard.4t2.com Fri Jul 17 15:23:37 2020 From: pve at junkyard.4t2.com (Tom Weber) Date: Fri, 17 Jul 2020 15:23:37 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1594969118.mb1771vpca.astroid@nora.none> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> <1594969118.mb1771vpca.astroid@nora.none> Message-ID: <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com> Am Freitag, den 17.07.2020, 09:31 +0200 schrieb Fabian Gr?nbichler: > On July 16, 2020 3:03 pm, Tom Weber wrote: > > Am Dienstag, den 14.07.2020, 17:52 +0200 schrieb Thomas Lamprecht: > > > > > > Proxmox Backup Server effectively does that too, but independent > > > from > > > the > > > source storage. We always get the last backup index and only > > > upload > > > the chunks > > > which changed. For running VMs dirty-bitmap is on to improve this > > > (avoids > > > reading of unchanged blocks) but it's only an optimization - the > > > backup is > > > incremental either way. > > > > So there is exactly one dirty-bitmap that get's nulled after a > > backup? > > > > I'm asking because I have Backup setups with 2 Backup Servers at > > different Locations, backing up (file-level, incremental) on odd > > days > > to server1 on even days to server2. > > > > Such a setup wouldn't work with the block level incremental backup > > and > > the dirty-bitmap for pve vms + pbs, right? > > > > Regards, > > Tom > > right now, this would not work since for each backup, the bitmap > would > be invalidated since the last backup returned by the server does not > match the locally stored value. theoretically we could track > multiple > backup storages, but bitmaps are not free and the handling would > quickly > become unwieldy. > > probably you are better off backing up to one server and syncing > that to your second one - you can define both as storage on the PVE > side > and switch over the backup job targets if the primary one fails. > > theoretically[1] > > 1.) backup to A > 2.) sync A->B > 3.) backup to B > 4.) sync B->A > 5.) repeat > > works as well and keeps the bitmap valid, but you carefully need to > lock-step backup and sync jobs, so it's probably less robust than: > > 1.) backup to A > 2.) sync A->B > > where missing a sync is not ideal, but does not invalidate the > bitmap. > > note that your backup will still be incremental in any case w.r.t. > client <-> server traffic, the client just has to re-read all disks > to > decide whether it has to upload those chunks or not if the bitmap is > not > valid or does not exist. > > 1: theoretically, as you probably run into > https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your > backups as 'backup at pam', which is not recommended ;) > thanks for the very detailed answer :) I was already thinking that this wouldn't work like my current setup. Once the bitmap on the source side of the backup gets corrupted for whatever reason, incremental wouldn't work and break. 
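PS: for the actual copy, a plain invocation along the lines of

# dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/vm-102-disk-0.raw bs=4M status=progress

should be fine (device, target path and VM id are simply taken from your example). If the source disk throws read errors, ddrescue is usually a better tool than plain dd. Once the host is healthy again, the resulting file can be placed as a raw image in a directory storage - e.g. /var/lib/vz/images/102/vm-102-disk-0.raw for the default 'local' storage - so the guest config can reference it.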
Is there some way that the system would notify such a "corrupted" bitmap? I'm thinking of a manual / test / accidential backup run to a different backup server which would / could ruin all further regular incremental backups undetected. about my setup scenario - a bit off topic - backing up to 2 different locations every other day basically doubles my backup space and reduces the risk of one failing backup server - of course by taking a 50:50 chance of needing to go back 2 days in a worst case scenario. Syncing the backup servers would require twice the space capacity (and additional bw). For now I'm just trying to understand the features and limits of pbs - which really looks nice so far! Regards, Tom From t.lamprecht at proxmox.com Fri Jul 17 19:43:29 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 17 Jul 2020 19:43:29 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> <1594969118.mb1771vpca.astroid@nora.none> <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com> Message-ID: <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com> On 17.07.20 15:23, Tom Weber wrote: > Am Freitag, den 17.07.2020, 09:31 +0200 schrieb Fabian Gr?nbichler: >> On July 16, 2020 3:03 pm, Tom Weber wrote: >>> Am Dienstag, den 14.07.2020, 17:52 +0200 schrieb Thomas Lamprecht: >>>> >>>> Proxmox Backup Server effectively does that too, but independent >>>> from >>>> the >>>> source storage. We always get the last backup index and only >>>> upload >>>> the chunks >>>> which changed. For running VMs dirty-bitmap is on to improve this >>>> (avoids >>>> reading of unchanged blocks) but it's only an optimization - the >>>> backup is >>>> incremental either way. >>> >>> So there is exactly one dirty-bitmap that get's nulled after a >>> backup? >>> >>> I'm asking because I have Backup setups with 2 Backup Servers at >>> different Locations, backing up (file-level, incremental) on odd >>> days >>> to server1 on even days to server2. >>> >>> Such a setup wouldn't work with the block level incremental backup >>> and >>> the dirty-bitmap for pve vms + pbs, right? >>> >>> Regards, >>> Tom >> >> right now, this would not work since for each backup, the bitmap >> would >> be invalidated since the last backup returned by the server does not >> match the locally stored value. theoretically we could track >> multiple >> backup storages, but bitmaps are not free and the handling would >> quickly >> become unwieldy. >> >> probably you are better off backing up to one server and syncing >> that to your second one - you can define both as storage on the PVE >> side >> and switch over the backup job targets if the primary one fails. >> >> theoretically[1] >> >> 1.) backup to A >> 2.) sync A->B >> 3.) backup to B >> 4.) sync B->A >> 5.) repeat >> >> works as well and keeps the bitmap valid, but you carefully need to >> lock-step backup and sync jobs, so it's probably less robust than: >> >> 1.) backup to A >> 2.) sync A->B >> >> where missing a sync is not ideal, but does not invalidate the >> bitmap. 
>> >> note that your backup will still be incremental in any case w.r.t. >> client <-> server traffic, the client just has to re-read all disks >> to >> decide whether it has to upload those chunks or not if the bitmap is >> not >> valid or does not exist. >> >> 1: theoretically, as you probably run into >> https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your >> backups as 'backup at pam', which is not recommended ;) >> > > thanks for the very detailed answer :) > > I was already thinking that this wouldn't work like my current setup. > > Once the bitmap on the source side of the backup gets corrupted for > whatever reason, incremental wouldn't work and break. > Is there some way that the system would notify such a "corrupted" > bitmap? > I'm thinking of a manual / test / accidential backup run to a different > backup server which would / could ruin all further regular incremental > backups undetected. If a backup fails, or the last backup index we get doesn't matches the checksum we cache in the VM QEMU process we drop the bitmap and do read everything (it's still send incremental from the index we got now), and setup a new bitmap from that point. > > > about my setup scenario - a bit off topic - backing up to 2 different > locations every other day basically doubles my backup space and reduces > the risk of one failing backup server - of course by taking a 50:50 > chance of needing to go back 2 days in a worst case scenario. > Syncing the backup servers would require twice the space capacity (and > additional bw). I do not think it would require twice as much space. You already have now twice copies of what normally would be used for a single backup target. So even if deduplication between backups is way off you'd still only need that if you sync remotes. And normally you should need less, as deduplication should reduce the per-backup server storage space and thus the doubled space usage from syncing is actually smaller than the doubled space usage from the odd/even backups - or? Note that remotes sync only the delta since last sync, so bandwidth correlates to that delta churn. And as long as that churn stays below 50% size of a full backup you still need less total bandwidth than the odd/even full-backup approach. At least if averaged over time. cheers, Thomas From devzero at web.de Sat Jul 18 15:02:38 2020 From: devzero at web.de (Roland) Date: Sat, 18 Jul 2020 15:02:38 +0200 Subject: [PVE-User] linux idle cpu overhead in kvm - old issue, but still there in 2020... In-Reply-To: <79fb6f58-0d41-80da-4bfa-2cdc344a3245@web.de> References: <4900ba10-107e-439c-9716-6e988ae7f5ef@web.de> <84b3b8ca-80f8-94d0-27c7-899743ac08e1@web.de> <79fb6f58-0d41-80da-4bfa-2cdc344a3245@web.de> Message-ID: <936489cc-8971-5e21-f25f-eafd4bb3795e@web.de> apparently, this issue will get fixed upstream (in systemd package): https://github.com/systemd/systemd/pull/353#issuecomment-658810289 https://github.com/systemd/systemd/pull/16476 roland Am 15.07.20 um 19:18 schrieb Roland: > > >if i change VMs machine type from i440fx(default) to q35 the problem > >goes away. > > > >the same applies when running "powertop --auto-tune" inside the guest > >(with i440fx type - enable autosuspend for usb-controller + tablet > device). 
> > third option (for VMs without gui): > > set "use tablet for pointer " to "No" in VM's options > > https://forum.proxmox.com/threads/high-cpu-usage-of-usb-tablet-device-in-debian.45307/ > > https://forum.proxmox.com/threads/use-tablet-for-pointer-option-causing-cpu-usage-on-linux.54084/ > > roland > > > Am 14.07.20 um 19:16 schrieb Roland: >> nice :) >> >> what guest OS do you use and which showed the problem ? >> >> as i think that q35 will not be default in kvm or proxmox anytime >> soon, shouldn't we perhaps file a bug report for every distro affected ? >> >> is there someone who likes to work with this (help testing, >> writing/tracking bug reports...) ? >> >> could perhaps save some tons of CO2 .... >> >> roland >> >> >> Am 14.07.20 um 18:05 schrieb Atila Vasconcelos: >>> Wow, I just tried this at my servers (very old Dell PowerEdge 2950); >>> >>> The results are impressive! >>> >>> 8o >>> >>> >>> ABV >>> >>> >>> On 2020-07-13 3:15 a.m., Roland wrote: >>>> hello, >>>> >>>> i have found that there is an old bug still around in linux, which is >>>> causing quite an amount of unnecessary cpu consumption in kvm/proxmox, >>>> and thus, wasting precious power. >>>> >>>> i run some proxmox installations on older systems and on those, it's >>>> quite significant difference. >>>> >>>> on the slowest system, a single debian 10 VM , kvm process is at >>>> 20% cpu >>>> (VM is 100% idle) when this issue is present. >>>> >>>> if i change VMs machine type from i440fx(default) to q35 the problem >>>> goes away. >>>> >>>> the same applies when running "powertop --auto-tune" inside the guest >>>> (with i440fx type - enable autosuspend for usb-controller + tablet >>>> device). >>>> >>>> on some L5630 machine, in proxmox summary i see "CPU usage" drop from >>>> 10% to <1%. >>>> >>>> see: >>>> https://bugzilla.redhat.com/show_bug.cgi?id=478317 >>>> https://bugzilla.redhat.com/show_bug.cgi?id=949547 >>>> >>>> i guess this information could make a difference for people who run a >>>> large amount of virtual machines or use older systems/cpu's. >>>> >>>> on most recent cpu's, i think the difference is not that big. >>>> >>>> anyway, i really wonder how linux bugs have such great survival >>>> capability.... 
>>>> >>>> regards >>>> roland >>>> >>>> more references: >>>> https://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00149.html >>>> https://www.redhat.com/archives/vfio-users/2015-November/msg00159.html >>>> >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at lists.proxmox.com >>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at lists.proxmox.com >>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> From pve at junkyard.4t2.com Sat Jul 18 16:59:20 2020 From: pve at junkyard.4t2.com (Tom Weber) Date: Sat, 18 Jul 2020 16:59:20 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> <1594969118.mb1771vpca.astroid@nora.none> <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com> <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com> Message-ID: <0c115f31d30593e8f169ef6c6418128bbdb1435a.camel@junkyard.4t2.com> Am Freitag, den 17.07.2020, 19:43 +0200 schrieb Thomas Lamprecht: > On 17.07.20 15:23, Tom Weber wrote: > > thanks for the very detailed answer :) > > > > I was already thinking that this wouldn't work like my current > > setup. > > > > Once the bitmap on the source side of the backup gets corrupted for > > whatever reason, incremental wouldn't work and break. > > Is there some way that the system would notify such a "corrupted" > > bitmap? > > I'm thinking of a manual / test / accidential backup run to a > > different > > backup server which would / could ruin all further regular > > incremental > > backups undetected. > > If a backup fails, or the last backup index we get doesn't matches > the > checksum we cache in the VM QEMU process we drop the bitmap and do > read > everything (it's still send incremental from the index we got now), > and > setup a new bitmap from that point. ah, I think I start to understand (read a bit about the qemu side too now) :) So you keep some checksum/signature of a successfull backup run with the one (non-persistant) dirty bitmap in qemu. The next backup run can check this and only makes use of the bitmap if it matches else it will fall back to reading and comparing all qemu blocks against the ones in the backup - saving only the changed ones? If that's the case, it's the answer I was looking for :) > > about my setup scenario - a bit off topic - backing up to 2 > > different > > locations every other day basically doubles my backup space and > > reduces > > the risk of one failing backup server - of course by taking a 50:50 > > chance of needing to go back 2 days in a worst case scenario. > > Syncing the backup servers would require twice the space capacity > > (and > > additional bw). > > I do not think it would require twice as much space. You already have > now > twice copies of what normally would be used for a single backup > target. > So even if deduplication between backups is way off you'd still only > need > that if you sync remotes. 
And normally you should need less, as > deduplication should reduce the per-backup server storage space and > thus > the doubled space usage from syncing is actually smaller than the > doubled > space usage from the odd/even backups - or? First of all, that noted backup scenario was not designed for a blocklevel incremental backup like pbs is meant. I don't know yet if I'd do it like this for pbs. But it probably helps to understand why it raised the above question. If the same "area" of data changes everyday, say 1GB, and I do incremental backups and have like 10GB of space for that on 2 independent Servers. Doing that incremental Backup odd/even to those 2 Backupservers, I end up with 20 days of history whereas with 2 syncronized Backupservers only 10 days of history are possible (one could also translate this in doubled backup space ;) ). And then there are bandwith considerations between these 3 locations. > Note that remotes sync only the delta since last sync, so bandwidth > correlates > to that delta churn. And as long as that churn stays below 50% size > of a full > backup you still need less total bandwidth than the odd/even full- > backup > approach. At least if averaged over time. ohh... I think there's the misunderstanding: I wasn't talking about odd/even FULL-backups! Right now I'm doing odd/even incremental backups! Incremental against the last state of the backup server im backing up to (backing up what changed in 2 days). Best, Tom From t.lamprecht at proxmox.com Sat Jul 18 19:50:25 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 18 Jul 2020 19:50:25 +0200 Subject: [PVE-User] PBS beta - encrypted PVE integrated backups In-Reply-To: <4da35950-a20f-f42d-db14-afc1df926183@binovo.es> References: <4da35950-a20f-f42d-db14-afc1df926183@binovo.es> Message-ID: On 16.07.20 13:57, Eneko Lacunza wrote: > Found the answer in the forum: > https://forum.proxmox.com/threads/how-to-do-encrypted-backups.72868/ FYI, if you're updated to libpvestorage-perl version 6.2-5 doing the following also works: # pvesm set --encryption-key autogen Editing the storage.cfg cannot work because we do not save the file there as it is readable for all usesrs in the www-data group. We save it to the /etc/pve/priv/storage directory which is root only. cheers, Thomas > > As Wolfgang said: > > the file has to be manually created via > `proxmox-backup-client key create --kdf=none /etc/pve/priv/storage/STORAGENAME.enc` > > Thanks > Eneko > > El 16/7/20 a las 13:09, Eneko Lacunza escribi?: >> Hi, >> >> I'm playing a bit with PBS Beta, found some quirks (reported to bugzilla) but basic setup with PVE integration has been quite pleasant. >> >> I'm trying to setup encryption, but can't figure exactly how. Tried setting "encryption-key autogen" in storage.cfg of PVE, and creating key pair and importing the pub key with "proxmox-backup-client" in PVE node, but doesn't seem to encrypt the backups. >> >> Is there any doc or example for this? 
>> >> Thanks a lot >> Eneko >> > > From t.lamprecht at proxmox.com Sat Jul 18 20:07:39 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Sat, 18 Jul 2020 20:07:39 +0200 Subject: [PVE-User] Proxmox Backup Server (beta) In-Reply-To: <0c115f31d30593e8f169ef6c6418128bbdb1435a.camel@junkyard.4t2.com> References: <7d1bd7a4-f47b-4a1a-9278-ff1889508c33@gmail.com> <1768587204.461.1594383204281@webmail.proxmox.com> <450971473.487.1594395714078@webmail.proxmox.com> <1514606299.525.1594478405167@webmail.proxmox.com> <16057806.272035.1594737045788.JavaMail.zimbra@odiso.com> <0852a3fa-ab39-d551-5a01-0264687d4b56@proxmox.com> <4b7e3a23d9b1f41ef51e6363373072d265797380.camel@junkyard.4t2.com> <1594969118.mb1771vpca.astroid@nora.none> <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com> <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com> <0c115f31d30593e8f169ef6c6418128bbdb1435a.camel@junkyard.4t2.com> Message-ID: <9dc7fbd9-867e-ae29-bd35-541a4bb201ac@proxmox.com> On 18.07.20 16:59, Tom Weber wrote: > Am Freitag, den 17.07.2020, 19:43 +0200 schrieb Thomas Lamprecht: >> If a backup fails, or the last backup index we get doesn't matches >> the >> checksum we cache in the VM QEMU process we drop the bitmap and do >> read >> everything (it's still send incremental from the index we got now), >> and >> setup a new bitmap from that point. > > ah, I think I start to understand (read a bit about the qemu side too > now) :) > > So you keep some checksum/signature of a successfull backup run with > the one (non-persistant) dirty bitmap in qemu. > The next backup run can check this and only makes use of the bitmap if > it matches else it will fall back to reading and comparing all qemu > blocks against the ones in the backup - saving only the changed ones? Exactly. > First of all, that noted backup scenario was not designed for a > blocklevel incremental backup like pbs is meant. I don't know yet if > I'd do it like this for pbs. But it probably helps to understand why it > raised the above question. > > If the same "area" of data changes everyday, say 1GB, and I do > incremental backups and have like 10GB of space for that on 2 > independent Servers. > Doing that incremental Backup odd/even to those 2 Backupservers, I end > up with 20 days of history whereas with 2 syncronized Backupservers > only 10 days of history are possible (one could also translate this in > doubled backup space ;) ). Yeah if only the same disk blocks are touched the math would work out. But you've doubled the risk of loosing the most recent backup, that's the price. But you do you, I'd honestly just check it out and test around a bit to see how it really behaves for your use case and setup behavior and limitations. cheers, Thomas From elacunza at binovo.es Tue Jul 21 08:07:56 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 21 Jul 2020 08:07:56 +0200 Subject: PBS beta - encrypted PVE integrated backups In-Reply-To: References: <4da35950-a20f-f42d-db14-afc1df926183@binovo.es> Message-ID: <8468cf23-222f-7502-1dae-343f5915fc02@binovo.es> Thanks for the hint Thomas, now I really understand why it didn't work! 
:-) El 18/7/20 a las 19:50, Thomas Lamprecht escribi?: > On 16.07.20 13:57, Eneko Lacunza wrote: >> Found the answer in the forum: >> https://forum.proxmox.com/threads/how-to-do-encrypted-backups.72868/ > FYI, if you're updated to libpvestorage-perl version 6.2-5 doing the following also > works: > > # pvesm set --encryption-key autogen > > Editing the storage.cfg cannot work because we do not save the file there > as it is readable for all usesrs in the www-data group. We save it to > the /etc/pve/priv/storage directory which is root only. > > cheers, > Thomas > >> As Wolfgang said: >> >> the file has to be manually created via >> `proxmox-backup-client key create --kdf=none /etc/pve/priv/storage/STORAGENAME.enc` >> >> Thanks >> Eneko >> >> El 16/7/20 a las 13:09, Eneko Lacunza escribi?: >>> Hi, >>> >>> I'm playing a bit with PBS Beta, found some quirks (reported to bugzilla) but basic setup with PVE integration has been quite pleasant. >>> >>> I'm trying to setup encryption, but can't figure exactly how. Tried setting "encryption-key autogen" in storage.cfg of PVE, and creating key pair and importing the pub key with "proxmox-backup-client" in PVE node, but doesn't seem to encrypt the backups. >>> >>> Is there any doc or example for this? >>> >>> Thanks a lot >>> Eneko >>> >> > > -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From f.cuseo at panservice.it Tue Jul 21 08:32:55 2020 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Tue, 21 Jul 2020 08:32:55 +0200 (CEST) Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates Message-ID: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> Good morning. I have just upgraded a 6.2 cluster from 6.2.2 to 6.2.10; suddently, all my Centos 5.X guests, with virtio ethernet drivers, stop working. Changing ethernet from virtio to e1000 fix the problem. Someone else have this problem ? Something to fix and come back to virtio drivers ? (I can't upgrade those guests to newer centos). Regards, Fabrizio -- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From mailinglists at lucassen.org Tue Jul 21 10:41:54 2020 From: mailinglists at lucassen.org (richard lucassen) Date: Tue, 21 Jul 2020 10:41:54 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> Message-ID: <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> On Tue, 21 Jul 2020 08:32:55 +0200 (CEST) Fabrizio Cuseo wrote: > Good morning. > I have just upgraded a 6.2 cluster from 6.2.2 to 6.2.10; suddently, > all my Centos 5.X guests, with virtio ethernet drivers, stop working. > Changing ethernet from virtio to e1000 fix the problem. > > Someone else have this problem ? Something to fix and come back to > virtio drivers ? (I can't upgrade those guests to newer centos). I noticed the same thing on pm-5.4 when upgrading from 4.15.18-27-pve to 4.15.18-30-pve. 
Suddenly on (old) Debian Lenny guests the virtio drivers got very unstable. I changed them to rtl8139 and it seems to be much stabler, but not resolved. I created a cronjob that stops the network, unloads the kernel modules, reloads them and restarts the network. There is definitly something wrong somewhere. R. -- richard lucassen http://contact.xaq.nl/ From mir at miras.org Tue Jul 21 11:06:27 2020 From: mir at miras.org (Michael Rasmussen) Date: Tue, 21 Jul 2020 11:06:27 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> Message-ID: <20200721110627.5433f4c0@sleipner.datanom.net> On Tue, 21 Jul 2020 10:41:54 +0200 richard lucassen wrote: > network. There is definitly something wrong somewhere. > The only thing wrong here is that you a running a version of RHEL/CentOS which is in maintenance support 2. This means there will be no more updates except critical bugs. As of March 31, 2017 it is considered in deep freeze! -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E mir datanom net https://pgp.key-server.io/pks/lookup?search=0xE501F51C mir miras org https://pgp.key-server.io/pks/lookup?search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: All intelligent species own cats. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From f.cuseo at panservice.it Tue Jul 21 12:34:39 2020 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Tue, 21 Jul 2020 12:34:39 +0200 (CEST) Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <20200721110627.5433f4c0@sleipner.datanom.net> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> Message-ID: <958467089.50148.1595327679169.JavaMail.zimbra@zimbra.panservice.it> I know that CentOS 5.X is an old release, but sometimes, if you have some customer's application, and he can't upgrade the OS due to some compatibility issue with applications, is not so simple to upgrade the OS release. I only want to tell that with the previous minor release of PVE (and qemu/kvm of course), i had no problems at all. Fabrizio ----- Il 21-lug-20, alle 11:06, Michael Rasmussen mir at miras.org ha scritto: > On Tue, 21 Jul 2020 10:41:54 +0200 > richard lucassen wrote: > >> network. There is definitly something wrong somewhere. >> > The only thing wrong here is that you a running a version of > RHEL/CentOS which is in maintenance support 2. This means there will be > no more updates except critical bugs. As of March 31, 2017 it is > considered in deep freeze! > > -- > Hilsen/Regards > Michael Rasmussen > > Get my public GnuPG keys: > michael rasmussen cc > https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E > mir datanom net > https://pgp.key-server.io/pks/lookup?search=0xE501F51C > mir miras org > https://pgp.key-server.io/pks/lookup?search=0xE3E80917 > -------------------------------------------------------------- > /usr/games/fortune -es says: > All intelligent species own cats. 
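PS: in case it helps others hitting the same issue, the NIC model can also be switched from the CLI instead of the GUI, for example something like

qm set <vmid> --net0 e1000=<current MAC>,bridge=vmbr0

which changes only the model while keeping the guest's MAC address stable (vmid, MAC and bridge name are placeholders for your own values, and the bridge/VLAN options should match what the VM already uses).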
-- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From daniel at firewall-services.com Wed Jul 22 18:40:34 2020 From: daniel at firewall-services.com (Daniel Berteaud) Date: Wed, 22 Jul 2020 18:40:34 +0200 (CEST) Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? Message-ID: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> Hi I've started playing with PBS on some VM. So far, it's looking really promizing. There's one strange thing though : the percent of the dirty data. For example, I backup one VM every 2 or 3 days. It's a moderately busy server, mainly serving a MariaDB database (zabbix server + mariadb). On each backup, I get similar dirty values : INFO: using fast incremental mode (dirty-bitmap), 492.7 GiB dirty of 590.0 GiB total While I'm sur not even 10% of this has really been written. Get more or less the same problem on other VM. One which I know just sleep all day (my personnal OnlyOffice document server), and which I backup daily, and get values like : INFO: using fast incremental mode (dirty-bitmap), 5.0 GiB dirty of 10.0 GiB total Or another small one (personnal samba DC controler) : INFO: using fast incremental mode (dirty-bitmap), 13.0 GiB dirty of 20.0 GiB total The only write activity for those 2 are just a few KB or maybe MB of log lines. Respectivly 5 and 13GB of dirty blocks seems unreal. Am I the only one seeing this ? Could the dirty-bitmap mark dirty blocks without write activity somehow ? Regards, Daniel -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From elacunza at binovo.es Thu Jul 23 08:20:45 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 23 Jul 2020 08:20:45 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> Message-ID: Hi Daniel, El 22/7/20 a las 18:40, Daniel Berteaud escribi?: > I've started playing with PBS on some VM. So far, it's looking really promizing. > There's one strange thing though : the percent of the dirty data. For example, I backup one VM every 2 or 3 days. It's a moderately busy server, mainly serving a MariaDB database (zabbix server + mariadb). On each backup, I get similar dirty values : > > INFO: using fast incremental mode (dirty-bitmap), 492.7 GiB dirty of 590.0 GiB total > > While I'm sur not even 10% of this has really been written. > > Get more or less the same problem on other VM. One which I know just sleep all day (my personnal OnlyOffice document server), and which I backup daily, and get values like : > > INFO: using fast incremental mode (dirty-bitmap), 5.0 GiB dirty of 10.0 GiB total > > Or another small one (personnal samba DC controler) : > > INFO: using fast incremental mode (dirty-bitmap), 13.0 GiB dirty of 20.0 GiB total > > The only write activity for those 2 are just a few KB or maybe MB of log lines. Respectivly 5 and 13GB of dirty blocks seems unreal. > > Am I the only one seeing this ? 
Could the dirty-bitmap mark dirty blocks without write activity somehow ? > I have just checked our test backups. We're using encryption and log seems a bit different, but I get the following values for 3 VMs: INFO: VM Name: dns INFO: include disk 'scsi0' 'proxmox_r3_ssd:vm-110-disk-0' 5G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating Proxmox Backup Server archive 'vm/110/2020-07-22T20:00:01Z' INFO: issuing guest-agent 'fs-freeze' command INFO: enabling encryption [...] INFO: backup was done incrementally, reused 8.00 MiB (0%) INFO: transferred 5.00 GiB in 61 seconds (83.9 MiB/s) INFO: VM Name: monitor INFO: include disk 'scsi0' 'proxmox_r3_ssd:vm-124-disk-0' 10G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating Proxmox Backup Server archive 'vm/124/2020-07-22T20:01:03Z' INFO: issuing guest-agent 'fs-freeze' command INFO: enabling encryption [...] INFO: backup was done incrementally, reused 2.61 GiB (26%) INFO: transferred 10.00 GiB in 63 seconds (162.5 MiB/s) INFO: VM Name: monitor2 INFO: include disk 'scsi0' 'proxmox_r3_ssd:vm-149-disk-0' 10G INFO: backup mode: snapshot INFO: ionice priority: 7 INFO: creating Proxmox Backup Server archive 'vm/149/2020-07-22T20:02:06Z' INFO: issuing guest-agent 'fs-freeze' command INFO: enabling encryption [...] INFO: backup was done incrementally, reused 112.00 MiB (1%) INFO: transferred 10.00 GiB in 69 seconds (148.4 MiB/s) All those servers have very low disk activity (monitors are nagios servers). From the logs I'd say incremental backup only reused (saved) 0%, 26% and 1% of backup size... but maybe log is wrong... ?? Cheers Eneko -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From f.gruenbichler at proxmox.com Thu Jul 23 08:43:06 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Thu, 23 Jul 2020 08:43:06 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> Message-ID: <1595486387.pi9zv7y79a.astroid@nora.none> On July 22, 2020 6:40 pm, Daniel Berteaud wrote: > Hi > > I've started playing with PBS on some VM. So far, it's looking really promizing. > There's one strange thing though : the percent of the dirty data. For example, I backup one VM every 2 or 3 days. It's a moderately busy server, mainly serving a MariaDB database (zabbix server + mariadb). On each backup, I get similar dirty values : > > INFO: using fast incremental mode (dirty-bitmap), 492.7 GiB dirty of 590.0 GiB total > > While I'm sur not even 10% of this has really been written. > > Get more or less the same problem on other VM. One which I know just sleep all day (my personnal OnlyOffice document server), and which I backup daily, and get values like : > > INFO: using fast incremental mode (dirty-bitmap), 5.0 GiB dirty of 10.0 GiB total > > Or another small one (personnal samba DC controler) : > > INFO: using fast incremental mode (dirty-bitmap), 13.0 GiB dirty of 20.0 GiB total > > The only write activity for those 2 are just a few KB or maybe MB of log lines. Respectivly 5 and 13GB of dirty blocks seems unreal. > > Am I the only one seeing this ? Could the dirty-bitmap mark dirty blocks without write activity somehow ? 
possibly you haven't upgraded to pve-qemu-kvm 5.0-11 (or your VM hasn't been restarted yet since the upgrade): https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=f257cc05f4fbf772cad3231021b3ce7587127a1b the bitmap has a granularity of 4MB, so depending on the activity inside you can see quite a bit of amplification. also writing and then zeroing/reverting again to the old content would leave a mark in the bitmap without permanently changing the contents. From daniel at firewall-services.com Thu Jul 23 08:53:01 2020 From: daniel at firewall-services.com (Daniel Berteaud) Date: Thu, 23 Jul 2020 08:53:01 +0200 (CEST) Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <1595486387.pi9zv7y79a.astroid@nora.none> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> Message-ID: <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> ----- Le 23 Juil 20, ? 8:43, Fabian Gr?nbichler f.gruenbichler at proxmox.com a ?crit : > possibly you haven't upgraded to pve-qemu-kvm 5.0-11 (or your VM hasn't > been restarted yet since the upgrade): > > https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=f257cc05f4fbf772cad3231021b3ce7587127a1b I'm running pve-qemu-kvm 5.0.0-11, and all the implied VM have been either (cold) rebooted, or migrated. > > the bitmap has a granularity of 4MB, so depending on the activity inside > you can see quite a bit of amplification. also writing and then > zeroing/reverting again to the old content would leave a mark in the > bitmap without permanently changing the contents. > Yes, I'd expect some amplification, but not that much. For my Zabbix server, it's nearly canceling all the benefit of using a dirty bitmap. One thing I've noted, is that I get expected values at least for one guest, running PfSense (where I get ~150MB of dirty blocks each days). Most of my other VM are Linux, I'll check if it could be related to the atime update or something Cheers, Daniel -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From mailinglists at lucassen.org Thu Jul 23 10:22:53 2020 From: mailinglists at lucassen.org (richard lucassen) Date: Thu, 23 Jul 2020 10:22:53 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> Message-ID: <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> On Tue, 21 Jul 2020 11:06:27 +0200 Michael Rasmussen via pve-user wrote: > On Tue, 21 Jul 2020 10:41:54 +0200 > richard lucassen wrote: > > > network. There is definitly something wrong somewhere. > > > The only thing wrong here is that you a running a version of > RHEL/CentOS which is in maintenance support 2. This means there will > be no more updates except critical bugs. As of March 31, 2017 it is > considered in deep freeze! I know. But sometimes you need to keep an old version and a good way to handle this is to run such an old version in a virtual environment. A vhost supplies virtual hardware and apparently this virtual hardware has changed. This is not A Good Thing IMHO. It runs well under subversion 27 of the pve kernel but has stopped under version 30. 
So I think this is a bug as I do not expect design changes between subversion 27 and 30. R. -- richard lucassen https://contact.xaq.nl/ From jbonor at gmail.com Thu Jul 23 11:00:55 2020 From: jbonor at gmail.com (Jorge Boncompte) Date: Thu, 23 Jul 2020 11:00:55 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> Message-ID: <3ce2e15f-9b92-64ce-af55-a21fa46dc5d6@gmail.com> El 23/7/20 a las 8:53, Daniel Berteaud escribi?: > ----- Le 23 Juil 20, ? 8:43, Fabian Gr?nbichler f.gruenbichler at proxmox.com a ?crit : > >> possibly you haven't upgraded to pve-qemu-kvm 5.0-11 (or your VM hasn't >> been restarted yet since the upgrade): >> >> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=f257cc05f4fbf772cad3231021b3ce7587127a1b > > I'm running pve-qemu-kvm 5.0.0-11, and all the implied VM have been either (cold) rebooted, or migrated. > >> >> the bitmap has a granularity of 4MB, so depending on the activity inside >> you can see quite a bit of amplification. also writing and then >> zeroing/reverting again to the old content would leave a mark in the >> bitmap without permanently changing the contents. >> > > Yes, I'd expect some amplification, but not that much. For my Zabbix server, it's nearly canceling all the benefit of using a dirty bitmap. > One thing I've noted, is that I get expected values at least for one guest, running PfSense (where I get ~150MB of dirty blocks each days). Most of my other VM are Linux, I'll check if it could be related to the atime update or something Hi, does the dirty-bitmap take somehow into account block discarding and zeroing? Because the other thing I would look for in this case is for a fstrim firing every day. > > Cheers, > Daniel > From f.gruenbichler at proxmox.com Thu Jul 23 11:34:15 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Thu, 23 Jul 2020 11:34:15 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <3ce2e15f-9b92-64ce-af55-a21fa46dc5d6@gmail.com> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> <3ce2e15f-9b92-64ce-af55-a21fa46dc5d6@gmail.com> Message-ID: <1595496811.im826y29fq.astroid@nora.none> On July 23, 2020 11:00 am, Jorge Boncompte wrote: > El 23/7/20 a las 8:53, Daniel Berteaud escribi?: >> ----- Le 23 Juil 20, ? 8:43, Fabian Gr?nbichler f.gruenbichler at proxmox.com a ?crit : >> >>> possibly you haven't upgraded to pve-qemu-kvm 5.0-11 (or your VM hasn't >>> been restarted yet since the upgrade): >>> >>> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=f257cc05f4fbf772cad3231021b3ce7587127a1b >> >> I'm running pve-qemu-kvm 5.0.0-11, and all the implied VM have been either (cold) rebooted, or migrated. >> >>> >>> the bitmap has a granularity of 4MB, so depending on the activity inside >>> you can see quite a bit of amplification. also writing and then >>> zeroing/reverting again to the old content would leave a mark in the >>> bitmap without permanently changing the contents. >>> >> >> Yes, I'd expect some amplification, but not that much. For my Zabbix server, it's nearly canceling all the benefit of using a dirty bitmap. 
>> One thing I've noted, is that I get expected values at least for one guest, running PfSense (where I get ~150MB of dirty blocks each days). Most of my other VM are Linux, I'll check if it could be related to the atime update or something > > Hi, does the dirty-bitmap take somehow into account block discarding > and zeroing? Because the other thing I would look for in this case is > for a fstrim firing every day. also a possible candidate. trim/discard of course changes the blocks and thus dirties the bitmap. From daniel at firewall-services.com Thu Jul 23 11:40:30 2020 From: daniel at firewall-services.com (Daniel Berteaud) Date: Thu, 23 Jul 2020 11:40:30 +0200 (CEST) Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <1595496811.im826y29fq.astroid@nora.none> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> <3ce2e15f-9b92-64ce-af55-a21fa46dc5d6@gmail.com> <1595496811.im826y29fq.astroid@nora.none> Message-ID: <189993915.79661.1595497230968.JavaMail.zimbra@fws.fr> ----- Le 23 Juil 20, ? 11:34, Fabian Gr?nbichler f.gruenbichler at proxmox.com a ?crit : > also a possible candidate. trim/discard of course changes the blocks and > thus dirties the bitmap. > Indeed, I do have a daily fstrim job running on all my Linux guests (and not on the PfSense one), that could explain it. So it would mean we have to choose between thin prov + reclaiming unused space, or efficient dirty bitmap ... I'll run some test with a weekly fstrim instead of daily. Cheers, Daniel -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From devzero at web.de Thu Jul 23 12:11:19 2020 From: devzero at web.de (Roland) Date: Thu, 23 Jul 2020 12:11:19 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> Message-ID: <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> i see that my centos 5.11 VM should have support for virtio (see below) and according to https://access.redhat.com/articles/2488201 rhel/centos supports this since 5.3. so if virtio worked before a proxmox update,? then i would call this a regression. can you tell which proxmox version worked for you with virtio ? i tested centos 5.11 installer on my older pve 6.0-4 (5.0.15-1-pve kernel) , but it does not find virtual disks. 
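A possible angle on the CentOS 5 / virtio disk problem described above: the CentOS 5 installer and stock initrd do not always load the virtio drivers, so a disk only shows up once the VM config really uses the virtio bus and the guest initrd contains virtio_blk. The following is only a sketch; VMID 100, the kernel version and the disk name are placeholders, not taken from the thread.

# On the PVE host: confirm the disk is attached on the virtio bus at all
# (a disk left on ide0/scsi0 never appears as /dev/vd* in the guest).
qm config 100 | grep -E '^(virtio|scsi|ide|sata)[0-9]'

# Inside an already installed CentOS 5 guest (booted on IDE first, for
# example): rebuild the initrd with the virtio modules before switching
# the disk over to virtio.
mkinitrd --with=virtio_pci --with=virtio_blk -f /boot/initrd-2.6.18-398.el5.img 2.6.18-398.el5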
roland [root at hr-neu ~]# uname -aLinux hr-neu 2.6.18-398.el5 #1 SMP Tue Sep 16 20:50:52 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux [root at centos5 ~]# cat /etc/redhat-release CentOS release 5.11 (Final) [root at centos5 ~]# rpm -q kernel --changelog|grep virtio|grep 446214 - [xen] virtio: include headers in kernel-headers package (Eduardo Pereira Habkost ) [446214] - [xen] virtio: add PV network and block drivers for KVM (Mark McLoughlin ) [446214] - [xen] virtio: include headers in kernel-headers package (Eduardo Pereira Habkost ) [446214] - [xen] virtio: add PV network and block drivers for KVM (Mark McLoughlin ) [446214] Am 23.07.20 um 10:22 schrieb richard lucassen: > On Tue, 21 Jul 2020 11:06:27 +0200 > Michael Rasmussen via pve-user wrote: > >> On Tue, 21 Jul 2020 10:41:54 +0200 >> richard lucassen wrote: >> >>> network. There is definitly something wrong somewhere. >>> >> The only thing wrong here is that you a running a version of >> RHEL/CentOS which is in maintenance support 2. This means there will >> be no more updates except critical bugs. As of March 31, 2017 it is >> considered in deep freeze! > I know. But sometimes you need to keep an old version and a good way to > handle this is to run such an old version in a virtual environment. > A vhost supplies virtual hardware and apparently this virtual hardware > has changed. This is not A Good Thing IMHO. > > It runs well under subversion 27 of the pve kernel but has stopped > under version 30. So I think this is a bug as I do not expect design > changes between subversion 27 and 30. > > R. > From elacunza at binovo.es Thu Jul 23 13:14:25 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 23 Jul 2020 13:14:25 +0200 Subject: PVE 6.2 e1000e driver hang and node fence Message-ID: <6c43fa05-f6ee-bfac-a3d3-b5b084bf0be2@binovo.es> Hi all, In a recently (8 days ago) updated PVE 6.2 node, e1000e driver has hanged and node has been fenced and rebooted. Syslog had several instances of the following: Jul 23 13:02:21 proxmox2 kernel: [694027.049891] e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang: Jul 23 13:02:21 proxmox2 kernel: [694027.049891] TDH????????????????? <0> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] TDT????????????????? <1> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] next_to_use????????? <1> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] next_to_clean??????? <0> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] buffer_info[next_to_clean]: Jul 23 13:02:21 proxmox2 kernel: [694027.049891] time_stamp?????????? <10a5668a3> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] next_to_watch??????? <0> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] jiffies????????????? <10a566a38> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] next_to_watch.status <0> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] MAC Status???????????? <80083> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] PHY Status???????????? <796d> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] PHY 1000BASE-T Status? <3800> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] PHY Extended Status??? <3000> Jul 23 13:02:21 proxmox2 kernel: [694027.049891] PCI Status???????????? 
<10> root at proxmox2:~# pveversion -v proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve) pve-manager: 6.2-10 (running version: 6.2-10/a20769ed) pve-kernel-5.4: 6.2-4 pve-kernel-helper: 6.2-4 pve-kernel-5.3: 6.1-6 pve-kernel-5.4.44-2-pve: 5.4.44-2 pve-kernel-5.3.18-3-pve: 5.3.18-3 pve-kernel-5.3.18-2-pve: 5.3.18-2 pve-kernel-4.13.13-2-pve: 4.13.13-33 ceph: 14.2.9-pve1 ceph-fuse: 14.2.9-pve1 corosync: 3.0.4-pve1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.16-pve1 libproxmox-acme-perl: 1.0.4 libpve-access-control: 6.1-2 libpve-apiclient-perl: 3.0-3 libpve-common-perl: 6.1-5 libpve-guest-common-perl: 3.1-1 libpve-http-server-perl: 3.0-6 libpve-storage-perl: 6.2-5 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 4.0.2-1 lxcfs: 4.0.3-pve3 novnc-pve: 1.1.0-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.2-9 pve-cluster: 6.1-8 pve-container: 3.1-11 pve-docs: 6.2-5 pve-edk2-firmware: 2.20200531-1 pve-firewall: 4.1-2 pve-firmware: 3.1-1 pve-ha-manager: 3.0-9 pve-i18n: 2.1-3 pve-qemu-kvm: 5.0.0-11 pve-xtermjs: 4.3.0-1 qemu-server: 6.2-10 smartmontools: 7.1-pve2 spiceterm: 3.1-1 vncterm: 1.6-1 zfsutils-linux: 0.8.4-pve1 Has anyone experienced this? Cluster has 3 almost identical nodes and only this has been affected for now... Thanks Eneko -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From mailinglists at lucassen.org Thu Jul 23 21:21:43 2020 From: mailinglists at lucassen.org (richard lucassen) Date: Thu, 23 Jul 2020 21:21:43 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> Message-ID: <20200723212143.58e743b84615aeb85a031e19@lucassen.org> On Thu, 23 Jul 2020 12:11:19 +0200 Roland wrote: I run a Debian Lenny VM: # cat /etc/issue.net Debian GNU/Linux 5.0 # modinfo virtio_net filename: /lib/modules/2.6.26-2-686/kernel/drivers/net/virtio_net.ko license: GPL description: Virtio network driver alias: virtio:d00000001v* depends: vermagic: 2.6.26-2-686 SMP mod_unload modversions 686 parm: napi_weight:int parm: csum:bool parm: gso:bool Proxmox host version 5.4: # cat /proc/version Linux version 4.15.18-30-pve (root at nora) (gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)) #1 SMP PVE 4.15.18-58 (Fri, 12 Jun 2020 13:53:01 +0200) I upgraded from 4.15.18-27-pve to 4.15.18-30-pve and then the virtio driver became very unstable. This machine has run for many years on multiple versions of Proxmox (IIRC from version 1) without any problem. R. > i see that my centos 5.11 VM should have support for virtio (see > below) and according to https://access.redhat.com/articles/2488201 > rhel/centos supports this since 5.3. > > so if virtio worked before a proxmox update,? then i would call this a > regression. > > can you tell which proxmox version worked for you with virtio ? > > i tested centos 5.11 installer on my older pve 6.0-4 (5.0.15-1-pve > kernel) , but it does not find virtual disks. 
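On the e1000e "Detected Hardware Unit Hang" that Eneko reports above: a mitigation that is frequently suggested for this driver (not verified for this particular box, so treat it as an assumption to test) is to switch off segmentation offloads on the affected port and watch whether the hangs stop. enp0s31f6 is the interface name taken from the log excerpt.

# Disable TCP/generic segmentation offload on the port that logs the hang:
ethtool -K enp0s31f6 tso off gso off
# Some reports also disable generic receive offload:
ethtool -K enp0s31f6 gro off
# If it helps, the same ethtool calls can be made persistent via a post-up
# hook in /etc/network/interfaces.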
> > roland > > [root at hr-neu ~]# uname -aLinux hr-neu 2.6.18-398.el5 #1 SMP Tue Sep 16 > 20:50:52 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux > > > [root at centos5 ~]# cat /etc/redhat-release > CentOS release 5.11 (Final) > [root at centos5 ~]# rpm -q kernel --changelog|grep virtio|grep 446214 > - [xen] virtio: include headers in kernel-headers package (Eduardo > Pereira Habkost ) [446214] > - [xen] virtio: add PV network and block drivers for KVM (Mark > McLoughlin ) [446214] > - [xen] virtio: include headers in kernel-headers package (Eduardo > Pereira Habkost ) [446214] > - [xen] virtio: add PV network and block drivers for KVM (Mark > McLoughlin ) [446214] > > Am 23.07.20 um 10:22 schrieb richard lucassen: > > On Tue, 21 Jul 2020 11:06:27 +0200 > > Michael Rasmussen via pve-user wrote: > > > >> On Tue, 21 Jul 2020 10:41:54 +0200 > >> richard lucassen wrote: > >> > >>> network. There is definitly something wrong somewhere. > >>> > >> The only thing wrong here is that you a running a version of > >> RHEL/CentOS which is in maintenance support 2. This means there > >> will be no more updates except critical bugs. As of March 31, 2017 > >> it is considered in deep freeze! > > I know. But sometimes you need to keep an old version and a good > > way to handle this is to run such an old version in a virtual > > environment. A vhost supplies virtual hardware and apparently this > > virtual hardware has changed. This is not A Good Thing IMHO. > > > > It runs well under subversion 27 of the pve kernel but has stopped > > under version 30. So I think this is a bug as I do not expect design > > changes between subversion 27 and 30. > > > > R. > > -- richard lucassen https://contact.xaq.nl/ From devzero at web.de Thu Jul 23 23:59:31 2020 From: devzero at web.de (Roland) Date: Thu, 23 Jul 2020 23:59:31 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> Message-ID: i just had a look on that, too - i was backing up a VM i backed up this afternoon, and for my curiosity 1GB out of 15GB was marked dirty. that looks a quite much for me for a mostly idle system, because there was definitely only a little bit of change on the system within logfiles in /var/log so i wonder what marked all that blocks dirty.... i'm also suspecting atime changes...will keep an eye on that.... regards roland Am 23.07.20 um 08:53 schrieb Daniel Berteaud: > ----- Le 23 Juil 20, ? 8:43, Fabian Gr?nbichler f.gruenbichler at proxmox.com a ?crit : > >> possibly you haven't upgraded to pve-qemu-kvm 5.0-11 (or your VM hasn't >> been restarted yet since the upgrade): >> >> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=f257cc05f4fbf772cad3231021b3ce7587127a1b > I'm running pve-qemu-kvm 5.0.0-11, and all the implied VM have been either (cold) rebooted, or migrated. > >> the bitmap has a granularity of 4MB, so depending on the activity inside >> you can see quite a bit of amplification. also writing and then >> zeroing/reverting again to the old content would leave a mark in the >> bitmap without permanently changing the contents. >> > Yes, I'd expect some amplification, but not that much. For my Zabbix server, it's nearly canceling all the benefit of using a dirty bitmap. 
> One thing I've noted, is that I get expected values at least for one guest, running PfSense (where I get ~150MB of dirty blocks each days). Most of my other VM are Linux, I'll check if it could be related to the atime update or something > > Cheers, > Daniel > From mark at openvs.co.uk Fri Jul 24 00:08:50 2020 From: mark at openvs.co.uk (Mark Adams) Date: Thu, 23 Jul 2020 23:08:50 +0100 Subject: [PVE-User] New list for PBS? In-Reply-To: References: <99214aeb-3104-7e0b-224a-8c157fdbb057@proxmox.com> Message-ID: Sorry it is only a week, but is this a good idea yet? I am close to unsubscribing from the pve-user list. On Wed, 15 Jul 2020, 11:22 Mark Adams via pve-user, < pve-user at lists.proxmox.com> wrote: > > > > ---------- Forwarded message ---------- > From: Mark Adams > To: Thomas Lamprecht > Cc: pve-user at lists.proxmox.com > Bcc: > Date: Wed, 15 Jul 2020 11:22:03 +0100 > Subject: Re: New list for PBS? > Sounds like a good idea. Thanks for your response Thomas. > > Cheers, > Mark > > On Wed, 15 Jul 2020 at 06:31, Thomas Lamprecht > wrote: > > > Hi, > > > > On 14.07.20 23:22, Mark Adams wrote: > > > My simple question for this email is, Are you going to create a new > > mailing > > > list for this? It seems to me that you should, as it is a separate > > > "product" that should have it's own focus. > > > > https://lists.proxmox.com/cgi-bin/mailman/listinfo > > > > There's already pbs-devel for Development discussion, a user list is not > > yet > > existent. > > > > > I for one, would prefer to focus on pve on this list. > > > > As, especially initially, the prime use case for Proxmox Backup Server > will > > be in combination with Proxmox VE, it may often be a fine line to where a > > mail > > would belong, some would maybe even address both lists. > > > > The beta announcement on this list gathered naturally quite a few > inquiries > > and discussions not always related to Proxmox VE directly, but that was > to > > be > > expected. > > > > I think for the initial beta we'll observe how many PBS-only threads will > > be > > made and use that to decide if it's own user list is warranted. > > > > cheers, > > Thomas > > > > > > ---------- Forwarded message ---------- > From: Mark Adams via pve-user > To: Thomas Lamprecht > Cc: Mark Adams , pve-user at lists.proxmox.com > Bcc: > Date: Wed, 15 Jul 2020 11:22:03 +0100 > Subject: Re: [PVE-User] New list for PBS? > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.cuseo at panservice.it Fri Jul 24 07:20:02 2020 From: f.cuseo at panservice.it (Fabrizio Cuseo) Date: Fri, 24 Jul 2020 07:20:02 +0200 (CEST) Subject: [PVE-User] Problem with ceph and unexpected clone Message-ID: <1406196509.66914.1595568002770.JavaMail.zimbra@zimbra.panservice.it> Hello; a little off-topic issue (ceph issue). I have a test cluster with last pve, ceph, bluestore, and replica 2 (not safe, i know). Due to some problems, I have an inconsistent PG and with pg repair I have a "unexepcted clone" with one object. I would like to identify the rbd image that uses this object, to delete it or restore it, but I can't find how can I have this info. PS: i also can't delete the object from the bluestore osd, because if I run the "rados list-inconsistent-obj" for the damaged pg, it returned me NO objects. 
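One way to answer the question above (which image owns the leftover object) follows from RBD's object naming: data objects are named rbd_data.<image prefix>.<object number>, and rbd info prints each image's block_name_prefix. A rough sketch, assuming the object name of the unexpected clone has been taken from the OSD/scrub log; the pool and object names below are placeholders.

POOL=rbd
OBJ=rbd_data.1f2a3b4c5d.00000000000000ff   # object reported by the scrub error
PREFIX=$(echo "$OBJ" | cut -d. -f2)

# Walk all images in the pool and print the one whose prefix matches.
for img in $(rbd ls -p "$POOL"); do
    rbd info "$POOL/$img" | grep -q "block_name_prefix: rbd_data\.${PREFIX}\$" \
        && echo "object $OBJ belongs to $POOL/$img"
done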
Thanks, Fabrizio -- --- Fabrizio Cuseo - mailto:f.cuseo at panservice.it Direzione Generale - Panservice InterNetWorking Servizi Professionali per Internet ed il Networking Panservice e' associata AIIP - RIPE Local Registry Phone: +39 0773 410020 - Fax: +39 0773 470219 http://www.panservice.it mailto:info at panservice.it Numero verde nazionale: 800 901492 From elacunza at binovo.es Fri Jul 24 08:07:34 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 24 Jul 2020 08:07:34 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <20200723212143.58e743b84615aeb85a031e19@lucassen.org> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> <20200723212143.58e743b84615aeb85a031e19@lucassen.org> Message-ID: What physical network interface make/model in Proxmox node? El 23/7/20 a las 21:21, richard lucassen escribi?: > On Thu, 23 Jul 2020 12:11:19 +0200 > Roland wrote: > > I run a Debian Lenny VM: > > # cat /etc/issue.net > Debian GNU/Linux 5.0 > > # modinfo virtio_net > filename: /lib/modules/2.6.26-2-686/kernel/drivers/net/virtio_net.ko > license: GPL > description: Virtio network driver > alias: virtio:d00000001v* > depends: > vermagic: 2.6.26-2-686 SMP mod_unload modversions 686 > parm: napi_weight:int > parm: csum:bool > parm: gso:bool > > Proxmox host version 5.4: > > # cat /proc/version > Linux version 4.15.18-30-pve (root at nora) (gcc version 6.3.0 20170516 > (Debian 6.3.0-18+deb9u1)) #1 SMP PVE 4.15.18-58 (Fri, 12 Jun 2020 > 13:53:01 +0200) > > I upgraded from 4.15.18-27-pve to 4.15.18-30-pve and then the virtio > driver became very unstable. This machine has run for many years on > multiple versions of Proxmox (IIRC from version 1) without any problem. > > R. > >> i see that my centos 5.11 VM should have support for virtio (see >> below) and according to https://access.redhat.com/articles/2488201 >> rhel/centos supports this since 5.3. >> >> so if virtio worked before a proxmox update,? then i would call this a >> regression. >> >> can you tell which proxmox version worked for you with virtio ? >> >> i tested centos 5.11 installer on my older pve 6.0-4 (5.0.15-1-pve >> kernel) , but it does not find virtual disks. >> >> roland >> >> [root at hr-neu ~]# uname -aLinux hr-neu 2.6.18-398.el5 #1 SMP Tue Sep 16 >> 20:50:52 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux >> >> >> [root at centos5 ~]# cat /etc/redhat-release >> CentOS release 5.11 (Final) >> [root at centos5 ~]# rpm -q kernel --changelog|grep virtio|grep 446214 >> - [xen] virtio: include headers in kernel-headers package (Eduardo >> Pereira Habkost ) [446214] >> - [xen] virtio: add PV network and block drivers for KVM (Mark >> McLoughlin ) [446214] >> - [xen] virtio: include headers in kernel-headers package (Eduardo >> Pereira Habkost ) [446214] >> - [xen] virtio: add PV network and block drivers for KVM (Mark >> McLoughlin ) [446214] >> >> Am 23.07.20 um 10:22 schrieb richard lucassen: >>> On Tue, 21 Jul 2020 11:06:27 +0200 >>> Michael Rasmussen via pve-user wrote: >>> >>>> On Tue, 21 Jul 2020 10:41:54 +0200 >>>> richard lucassen wrote: >>>> >>>>> network. There is definitly something wrong somewhere. >>>>> >>>> The only thing wrong here is that you a running a version of >>>> RHEL/CentOS which is in maintenance support 2. This means there >>>> will be no more updates except critical bugs. 
As of March 31, 2017 >>>> it is considered in deep freeze! >>> I know. But sometimes you need to keep an old version and a good >>> way to handle this is to run such an old version in a virtual >>> environment. A vhost supplies virtual hardware and apparently this >>> virtual hardware has changed. This is not A Good Thing IMHO. >>> >>> It runs well under subversion 27 of the pve kernel but has stopped >>> under version 30. So I think this is a bug as I do not expect design >>> changes between subversion 27 and 30. >>> >>> R. >>> > -- Eneko Lacunza | Tel. 943 569 206 | Email elacunza at binovo.es Director T?cnico | Site. https://www.binovo.es BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2? izda. Oficina 10-11, 20180 Oiartzun From daniel at firewall-services.com Fri Jul 24 09:02:30 2020 From: daniel at firewall-services.com (Daniel Berteaud) Date: Fri, 24 Jul 2020 09:02:30 +0200 (CEST) Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> Message-ID: <945247964.82475.1595574150577.JavaMail.zimbra@fws.fr> ----- Le 23 Juil 20, ? 23:59, Roland devzero at web.de a ?crit : > i just had a look on that, too - i was backing up a VM i backed up this > afternoon, > > and for my curiosity 1GB out of 15GB was marked dirty. > > that looks a quite much for me for a mostly idle system, because there > was definitely only a little bit of change on the system within logfiles > in /var/log > > so i wonder what marked all that blocks dirty.... > > i'm also suspecting atime changes...will keep an eye on that.... It was my daily fstrim in my case. Each time you trim, it'll dirty all the blocks corresponding to unused space. I've switched this to a weekly job so bacups can run most of the time efficiently. Since then, dirty blocks went from ~15GB per VM on average to something betwwen 800MB and 4GB, which is much closer to what I expect, considering the 4MB granularity of the bitmap. I'll check on my biggest VM (Zabbix server) but I expect similar results Thanks Fabian and Jorge for pointing this out Cheers Daniel -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From ronny+pve-user at aasen.cx Fri Jul 24 09:54:13 2020 From: ronny+pve-user at aasen.cx (Ronny Aasen) Date: Fri, 24 Jul 2020 09:54:13 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: <945247964.82475.1595574150577.JavaMail.zimbra@fws.fr> References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> <945247964.82475.1595574150577.JavaMail.zimbra@fws.fr> Message-ID: On 24.07.2020 09:02, Daniel Berteaud wrote: > ----- Le 23 Juil 20, ? 23:59, Roland devzero at web.de a ?crit : > >> i just had a look on that, too - i was backing up a VM i backed up this >> afternoon, >> >> and for my curiosity 1GB out of 15GB was marked dirty. >> >> that looks a quite much for me for a mostly idle system, because there >> was definitely only a little bit of change on the system within logfiles >> in /var/log >> >> so i wonder what marked all that blocks dirty.... >> >> i'm also suspecting atime changes...will keep an eye on that.... 
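Daniel's change above (trimming weekly instead of daily) and Ronny's question about the discard mount option can both be expressed as configuration; the sketch below assumes the daily trim came from a custom cron job or timer, and that discard is actually enabled on the virtual disk in PVE, otherwise the trims never reach the storage. VMID, storage and volume names are placeholders.

# PVE host: pass guest discards through to the storage (re-specify any
# other drive options the disk already has).
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Guest: periodic trim (Debian's fstrim.timer defaults to weekly) ...
systemctl enable --now fstrim.timer
# ... or continuous discard instead of periodic fstrim, via fstab:
# /dev/sda1  /  ext4  defaults,discard  0 1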
> > > It was my daily fstrim in my case. Each time you trim, it'll dirty all the blocks corresponding to unused space. > I've switched this to a weekly job so bacups can run most of the time efficiently. Since then, dirty blocks went from ~15GB per VM on average to something betwwen 800MB and 4GB, which is much closer to what I expect, considering the 4MB granularity of the bitmap. > > I'll check on my biggest VM (Zabbix server) but I expect similar results > > Thanks Fabian and Jorge for pointing this out > > Cheers > Daniel > would mounting the disk with discard help on this ? where it only trims blocks that are actually discarded ? instead of touching the whole disk with fstrim ? Ronny From mailinglists at lucassen.org Fri Jul 24 16:01:07 2020 From: mailinglists at lucassen.org (richard lucassen) Date: Fri, 24 Jul 2020 16:01:07 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> <20200723212143.58e743b84615aeb85a031e19@lucassen.org> Message-ID: <20200724160107.a13cc0b2ee356007e3d7b7d9@lucassen.org> On Fri, 24 Jul 2020 08:07:34 +0200 Eneko Lacunza via pve-user wrote: > What physical network interface make/model in Proxmox node? # lspci | grep -i ether 02:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20) 02:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20) 07:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06) 07:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06) The MAC adresses point to "Hewlett-Packard" in the MAC database and I'm at 900km's from these machines so I can't tell you if it is the Intel or the Broadcom. The loaded kernel modules are e1000e and bnx2. R. > El 23/7/20 a las 21:21, richard lucassen escribi?: > > On Thu, 23 Jul 2020 12:11:19 +0200 > > Roland wrote: > > > > I run a Debian Lenny VM: > > > > # cat /etc/issue.net > > Debian GNU/Linux 5.0 > > > > # modinfo virtio_net > > filename: /lib/modules/2.6.26-2-686/kernel/drivers/net/virtio_net.ko > > license: GPL > > description: Virtio network driver > > alias: virtio:d00000001v* > > depends: > > vermagic: 2.6.26-2-686 SMP mod_unload modversions 686 > > parm: napi_weight:int > > parm: csum:bool > > parm: gso:bool > > > > Proxmox host version 5.4: > > > > # cat /proc/version > > Linux version 4.15.18-30-pve (root at nora) (gcc version 6.3.0 20170516 > > (Debian 6.3.0-18+deb9u1)) #1 SMP PVE 4.15.18-58 (Fri, 12 Jun 2020 > > 13:53:01 +0200) > > > > I upgraded from 4.15.18-27-pve to 4.15.18-30-pve and then the virtio > > driver became very unstable. This machine has run for many years on > > multiple versions of Proxmox (IIRC from version 1) without any > > problem. > > > > R. > > > >> i see that my centos 5.11 VM should have support for virtio (see > >> below) and according to https://access.redhat.com/articles/2488201 > >> rhel/centos supports this since 5.3. > >> > >> so if virtio worked before a proxmox update,? then i would call > >> this a regression. > >> > >> can you tell which proxmox version worked for you with virtio ? > >> > >> i tested centos 5.11 installer on my older pve 6.0-4 (5.0.15-1-pve > >> kernel) , but it does not find virtual disks. 
> >> > >> roland > >> > >> [root at hr-neu ~]# uname -aLinux hr-neu 2.6.18-398.el5 #1 SMP Tue > >> Sep 16 20:50:52 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux > >> > >> > >> [root at centos5 ~]# cat /etc/redhat-release > >> CentOS release 5.11 (Final) > >> [root at centos5 ~]# rpm -q kernel --changelog|grep virtio|grep 446214 > >> - [xen] virtio: include headers in kernel-headers package (Eduardo > >> Pereira Habkost ) [446214] > >> - [xen] virtio: add PV network and block drivers for KVM (Mark > >> McLoughlin ) [446214] > >> - [xen] virtio: include headers in kernel-headers package (Eduardo > >> Pereira Habkost ) [446214] > >> - [xen] virtio: add PV network and block drivers for KVM (Mark > >> McLoughlin ) [446214] > >> > >> Am 23.07.20 um 10:22 schrieb richard lucassen: > >>> On Tue, 21 Jul 2020 11:06:27 +0200 > >>> Michael Rasmussen via pve-user wrote: > >>> > >>>> On Tue, 21 Jul 2020 10:41:54 +0200 > >>>> richard lucassen wrote: > >>>> > >>>>> network. There is definitly something wrong somewhere. > >>>>> > >>>> The only thing wrong here is that you a running a version of > >>>> RHEL/CentOS which is in maintenance support 2. This means there > >>>> will be no more updates except critical bugs. As of March 31, > >>>> 2017 it is considered in deep freeze! > >>> I know. But sometimes you need to keep an old version and a good > >>> way to handle this is to run such an old version in a virtual > >>> environment. A vhost supplies virtual hardware and apparently this > >>> virtual hardware has changed. This is not A Good Thing IMHO. > >>> > >>> It runs well under subversion 27 of the pve kernel but has stopped > >>> under version 30. So I think this is a bug as I do not expect > >>> design changes between subversion 27 and 30. > >>> > >>> R. > >>> > > > > -- richard lucassen https://contact.xaq.nl/ From athompso at athompso.net Fri Jul 24 16:19:25 2020 From: athompso at athompso.net (Adam Thompson) Date: Fri, 24 Jul 2020 09:19:25 -0500 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: <20200724160107.a13cc0b2ee356007e3d7b7d9@lucassen.org> References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> <20200723212143.58e743b84615aeb85a031e19@lucassen.org> <20200724160107.a13cc0b2ee356007e3d7b7d9@lucassen.org> Message-ID: On 2020-07-24 09:01, richard lucassen wrote: > On Fri, 24 Jul 2020 08:07:34 +0200 > Eneko Lacunza via pve-user wrote: > >> What physical network interface make/model in Proxmox node? > > # lspci | grep -i ether > 02:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 > Gigabit Ethernet (rev 20) > [...] > The MAC adresses point to "Hewlett-Packard" in the MAC database and I'm > at 900km's from these machines so I can't tell you if it is the Intel > or > the Broadcom. The loaded kernel modules are e1000e and bnx2. Two easy ways to tell: 1. install and run "lshw", probably "lshw -c network" is most useful 2. 
Use the bash script in the 2nd answer at https://unix.stackexchange.com/questions/41817/linux-how-to-find-the-device-driver-used-for-a-device -Adam From mailinglists at lucassen.org Sat Jul 25 10:12:53 2020 From: mailinglists at lucassen.org (richard lucassen) Date: Sat, 25 Jul 2020 10:12:53 +0200 Subject: [PVE-User] Problem with Centos 5.X virtio ethernet drivers and last PVE updates In-Reply-To: References: <1506136562.36294.1595313175981.JavaMail.zimbra@zimbra.panservice.it> <20200721104154.0f2188d06a622942fddb7cab@lucassen.org> <20200723102253.b991fc7e31f4e92f54a87500@lucassen.org> <8e14f866-ef7a-e7d3-1bb1-36a10ea4f9cb@web.de> <20200723212143.58e743b84615aeb85a031e19@lucassen.org> <20200724160107.a13cc0b2ee356007e3d7b7d9@lucassen.org> Message-ID: <20200725101253.49f10f1ddccf71163b4fe097@lucassen.org> On Fri, 24 Jul 2020 09:19:25 -0500 Adam Thompson wrote: > 1. install and run "lshw", probably "lshw -c network" is most useful It is this one (the active slave of a bond device): --- 8< --- *-network:1 description: Ethernet interface product: 82571EB Gigabit Ethernet Controller vendor: Intel Corporation physical id: 0.1 bus info: pci at 0000:07:00.1 logical name: ens2f1 version: 06 serial: 00:26:55:ed:6f:4c size: 1Gbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.4.1.1-NAPI duplex=full firmware=5.11-2 latency=0 link=yes multicast=yes port=twisted pair slave=yes speed=1Gbit/s resources: irq:46 memory:fbfa0000-fbfbffff memory:fbf80000-fbf9ffff ioport:5020(size=32) memory:fbf20000-fbf3ffff --- 8< --- -- richard lucassen https://contact.xaq.nl/ From gaio at sv.lnf.it Sat Jul 25 14:52:08 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Sat, 25 Jul 2020 14:52:08 +0200 Subject: [PVE-User] Temporary add a node to a cluster... Message-ID: <20200725125208.GA24275@lilliput.linux.it> I need to do a P2V 'in place' for a server, and i can use for that a 'test cluster' (two HP Microserver) i own. But the final result would be two different clusters. Also, i don't have too much space or time, so my idea is (as just done before): 1) P2V the server in my test cluster 2) reinstall the server with pve, join to the cluster 3) migrate the VM to the server. 4) 'detach' the cluster, eg, following: https://pve.proxmox.com/wiki/Cluster_Manager delete from server my test cluster and delete from test cluster my server. But is this way the server will belong to my 'test cluster', and clusters cannot be renamed, right? There's some way to 'uncluster' a server? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From pve at hw.wewi.de Sat Jul 25 18:04:19 2020 From: pve at hw.wewi.de (Hermann) Date: Sat, 25 Jul 2020 18:04:19 +0200 Subject: [PVE-User] FC-Luns only local devices? In-Reply-To: <20200630102125.nnrf3okkiwtdmojw@zeha.at> References: <8e7395ee-c3ab-03fc-aa2d-9bf2e0781375@hw.wewi.de> <20200630102125.nnrf3okkiwtdmojw@zeha.at> Message-ID: Hello Chris, (Sorry for not answering to the list, PM was sent erroneously.) 
Thank you for the eye-opener. I managed to configure multipath and can see all the luns and was able to create physical volumes and so on. I assume it is correct to configure LVM as shared, because proxmox itself handles the locking. At least a test with an ubuntu-vm went well. Only problem was that I could not stop the vm from the GUI, because a lock could not be aquired. Could you explain in a short sentence, why you avoid partitioning? I remember running into difficulties with an image I had created this way in another setup. I could not change the size when it was necessary an had to delete and recreate everything, which was quite painful, because I had to move around a TB of Files. Have a nice weekend, Hermann. Am 30.06.20 um 12:21 schrieb Chris Hofstaedtler | Deduktiva: > Hi Hermann , > > * Hermann [200630 10:51]: >> I would really appreciate being steered in the right direction as to the >> connection of Fibre-Channel-Luns in Proxmox. >> >> As far as I can see, FC-LUNs only appear als local blockdevices in PVE. >> If I have several LWL-Cables between my Cluster and these bloody >> expensive Storages, do I have to set up multipath manually in debian? > With most storages you need to configure multipath itself manually, > with the settings your storage vendor hands you. > > Our setup for this is: > > 1. Manual multipath setup, we tend to enable find_multipaths "smart" > to avoid configuring all WWIDs everywhere and so on. > > 2. The LVM PVs go directly on the mpathXX devices (no partitioning). > > 3. One VG per mpath device. The VGs are then seen by Proxmox just > like always. > > You have to take great care when removing block devices again, so > all PVE nodes release the VGs, PVs, all underlying device mapper > devices, and remove the physical sdXX devices, before removing the > exports from the storage side. > Often it's easier to reboot, and during the reboot fence access to > the to-be-removed LUN for the currently rebooting host. > > Chris > From yannis.milios at gmail.com Sat Jul 25 19:38:56 2020 From: yannis.milios at gmail.com (Yannis Milios) Date: Sat, 25 Jul 2020 18:38:56 +0100 Subject: [PVE-User] Temporary add a node to a cluster... In-Reply-To: <20200725125208.GA24275@lilliput.linux.it> References: <20200725125208.GA24275@lilliput.linux.it> Message-ID: Are you trying to convert (P2V) a physical server into a VM and then after repurpose the same server into a standalone PVE host? If so, then definitely it won't be quick, especially without some kind of shared storage. You could potentially reduce the time needed by following these steps... - P2V your physical server into a VM at your test PVE cluster. - Shutdown original server and then start its VM version on your test cluster. Leave it running there for some days and see if everything is working properly. - Repurpose original server into a PVE host but do _not_ add it as a new cluster member on your test cluster. Just connect it to the same physical network. - Start copying the running VM onto this standalone PVE host. You could use rsync for example or zfs send/recv if you are using ZFS at both sides. - Once transfer is finished, shutdown the VM on the test cluster and then immediately do a final rsync or zfs send/receive so that the latest changes are replicated on the target server. - Create a new VM on standalone server and attach the VM disk onto it. - Start the VM on standalone server and delete it from your test cluster. 
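With ZFS on both sides, the copy step in the list above can look roughly like the following; pool/dataset names, the VMID and the target host are placeholders, and the final incremental send is what keeps the cutover window short.

# Initial copy while the VM still runs on the test cluster:
zfs snapshot rpool/data/vm-100-disk-0@copy1
zfs send rpool/data/vm-100-disk-0@copy1 | ssh newhost zfs recv -u rpool/data/vm-100-disk-0

# After shutting the VM down, send only the delta since copy1:
zfs snapshot rpool/data/vm-100-disk-0@copy2
zfs send -i @copy1 rpool/data/vm-100-disk-0@copy2 | ssh newhost zfs recv -u rpool/data/vm-100-disk-0

# On the target: create the VM first (qm create 100 ...), let PVE pick up
# the copied volume as an unused disk, then attach it to the new VM.
qm rescan --vmid 100
qm config 100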
Yannis On Sat, 25 Jul 2020 at 14:00, Marco Gaiarin wrote: > > I need to do a P2V 'in place' for a server, and i can use for that a > 'test cluster' (two HP Microserver) i own. > But the final result would be two different clusters. > > Also, i don't have too much space or time, so my idea is (as just done > before): > > 1) P2V the server in my test cluster > > 2) reinstall the server with pve, join to the cluster > > 3) migrate the VM to the server. > > 4) 'detach' the cluster, eg, following: > > https://pve.proxmox.com/wiki/Cluster_Manager > > delete from server my test cluster and delete from test cluster my > server. > > > But is this way the server will belong to my 'test cluster', and > clusters cannot be renamed, right? > > There's some way to 'uncluster' a server? > > > Thanks. > > -- > dott. Marco Gaiarin GNUPG Key ID: > 240A3D66 > Associazione ``La Nostra Famiglia'' > http://www.lanostrafamiglia.it/ > Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento > (PN) > marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f > +39-0434-842797 > > Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! > http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 > (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- Sent from Gmail Mobile From lindsay.mathieson at gmail.com Sun Jul 26 03:08:01 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sun, 26 Jul 2020 11:08:01 +1000 Subject: [PVE-User] Can't migrate my HA VM's Message-ID: <88d992dc-5d46-f639-bd63-a721d0a51ab2@gmail.com> Have this error start today when I attempt to migrate my HA managed VM's task started by HA resource agent 2020-07-26 10:55:00 starting migration of VM 101 to node 'vnb' (192.168.5.240) 2020-07-26 10:55:02 ERROR: Failed to sync data - proxmox-backup-client failed: Error: error trying to connect: tcp connect error: No route to host (os error 113) 2020-07-26 10:55:02 aborting phase 1 - cleanup resources 2020-07-26 10:55:02 ERROR: migration aborted (duration 00:00:02): Failed to sync data - proxmox-backup-client failed: Error: error trying to connect: tcp connect error: No route to host (os error 113) TASK ERROR: migration aborted All servers are pingable from each other (static entries in host files). If I remove the VM from HA it migrates fine. Dunno when this started, but I used to be able to migrate HA managed VM's. All updates applied, non-subscription repository. Haven't tested restarting any services yet. -- Lindsay From lindsay.mathieson at gmail.com Sun Jul 26 03:14:10 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Sun, 26 Jul 2020 11:14:10 +1000 Subject: [PVE-User] Can't migrate my HA VM's In-Reply-To: <88d992dc-5d46-f639-bd63-a721d0a51ab2@gmail.com> References: <88d992dc-5d46-f639-bd63-a721d0a51ab2@gmail.com> Message-ID: <134f41a7-1c51-53ea-7144-78a73e7a709b@gmail.com> Ok, I resolved the issue, though maybe it exposes a problem? I had a test Proxmox Backup server Beta, which I took down and erased, but I left the storage entry. Once I deleted the Storage entry, the HA VM's were able to migrate. Would seem to be an issue if HA VM's can't be migrated if a backup server is down. 
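For the situation Lindsay describes above, the stale entry can also just be disabled instead of deleted, which keeps the definition around for when the backup server returns; 'pbs-test' below is a placeholder storage ID.

# Show which storages PVE still knows about and whether they are reachable:
pvesm status

# Either drop the dead PBS storage entirely ...
pvesm remove pbs-test
# ... or keep it configured but inactive, so migrations and storage scans
# should stop trying to contact it:
pvesm set pbs-test --disable 1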
On 26/07/2020 11:08 am, Lindsay Mathieson wrote: > > Have this error start today when I attempt to migrate my HA managed VM's > > task started by HA resource agent > 2020-07-26 10:55:00 starting migration of VM 101 to node 'vnb' > (192.168.5.240) > 2020-07-26 10:55:02 ERROR: Failed to sync data - proxmox-backup-client > failed: Error: error trying to connect: tcp connect error: No route to > host (os error 113) > 2020-07-26 10:55:02 aborting phase 1 - cleanup resources > 2020-07-26 10:55:02 ERROR: migration aborted (duration 00:00:02): > Failed to sync data - proxmox-backup-client failed: Error: error > trying to connect: tcp connect error: No route to host (os error 113) > TASK ERROR: migration aborted > > > All servers are pingable from each other (static entries in host > files). If I remove the VM from HA it migrates fine. Dunno when this > started, but I used to be able to migrate HA managed VM's. > > > All updates applied, non-subscription repository. > > > Haven't tested restarting any services yet. > > -- > Lindsay -- Lindsay From mark at tuxis.nl Sun Jul 26 11:24:34 2020 From: mark at tuxis.nl (Mark Schouten) Date: Sun, 26 Jul 2020 11:24:34 +0200 Subject: [PVE-User] PBS : is dirty-bitmap really accurate ? In-Reply-To: References: <1110267368.76036.1595436034847.JavaMail.zimbra@fws.fr> <1595486387.pi9zv7y79a.astroid@nora.none> <141180690.77175.1595487181182.JavaMail.zimbra@fws.fr> <945247964.82475.1595574150577.JavaMail.zimbra@fws.fr> Message-ID: <20200726092434.agdjhigt4qr4vkk6@shell.tuxis.net> On Fri, Jul 24, 2020 at 09:54:13AM +0200, Ronny Aasen wrote: > would mounting the disk with discard help on this ? where it only trims > blocks that are actually discarded ? instead of touching the whole disk with > fstrim ? I would expect so, yes. -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | info at tuxis.nl From proxmox at elchaka.de Sun Jul 26 13:18:06 2020 From: proxmox at elchaka.de (proxmox at elchaka.de) Date: Sun, 26 Jul 2020 13:18:06 +0200 Subject: [PVE-User] Problem with ceph and unexpected clone In-Reply-To: <1406196509.66914.1595568002770.JavaMail.zimbra@zimbra.panservice.it> References: <1406196509.66914.1595568002770.JavaMail.zimbra@zimbra.panservice.it> Message-ID: I guess you can have a Look via rbd info To find the related vm/disk Hth Mehmet Am 24. Juli 2020 07:20:02 MESZ schrieb Fabrizio Cuseo : > >Hello; a little off-topic issue (ceph issue). > >I have a test cluster with last pve, ceph, bluestore, and replica 2 >(not safe, i know). > >Due to some problems, I have an inconsistent PG and with pg repair I >have a "unexepcted clone" with one object. >I would like to identify the rbd image that uses this object, to delete >it or restore it, but I can't find how can I have this info. > >PS: i also can't delete the object from the bluestore osd, because if I >run the "rados list-inconsistent-obj" for the damaged pg, it returned >me NO objects. 
> >Thanks, Fabrizio > >-- >--- >Fabrizio Cuseo - mailto:f.cuseo at panservice.it >Direzione Generale - Panservice InterNetWorking >Servizi Professionali per Internet ed il Networking >Panservice e' associata AIIP - RIPE Local Registry >Phone: +39 0773 410020 - Fax: +39 0773 470219 >http://www.panservice.it mailto:info at panservice.it >Numero verde nazionale: 800 901492 > >_______________________________________________ >pve-user mailing list >pve-user at lists.proxmox.com >https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
From chris.hofstaedtler at deduktiva.com Sun Jul 26 23:55:42 2020 From: chris.hofstaedtler at deduktiva.com (Chris Hofstaedtler | Deduktiva) Date: Sun, 26 Jul 2020 23:55:42 +0200 Subject: [PVE-User] FC-Luns only local devices? In-Reply-To: References: <8e7395ee-c3ab-03fc-aa2d-9bf2e0781375@hw.wewi.de> <20200630102125.nnrf3okkiwtdmojw@zeha.at> Message-ID: <20200726215542.4pg3j4zujbabcevo@percival.namespace.at> Hi, * Hermann [200725 18:05]: > Could you explain in a short sentence, why you avoid partitioning? I > remember running into difficulties with an image I had created this way > in another setup. I could not change the size when it was necessary an > had to delete and recreate everything, which was quite painful, because > I had to move around a TB of Files. I don't know about images, but we were talking about LVM PVs on top of LUNs (SCSI disks from a Linux point of view). Now, if the only thing that goes on the LUN is an LVM PV, why add an extra layer, an MBR or GPT partitioning layer? >From my experience this adds two things: 1) another size definition, which you have to edit when resizing the LUN 2) another size definition, which is also read and cached by the Linux kernel, and - at least in the past - was often impossible to get updated without closing all open handles to the block device. Effectively, this meant rebooting.
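To illustrate points 1) and 2) above: with the PV sitting directly on the multipath device, growing a LUN online is roughly the following sequence; the sdX paths and the mpatha map name are placeholders, and the LUN has to be enlarged on the storage array first.

# Rescan each SCSI path so the kernel sees the new size:
echo 1 > /sys/class/block/sdc/device/rescan
echo 1 > /sys/class/block/sdd/device/rescan
# Let multipath pick up the new size, then grow the PV in place:
multipathd resize map mpatha
pvresize /dev/mapper/mpatha
vgs   # the VG should now show the additional free extents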
Also, local staff isn't too happy to fiddle with fdisk: delete and recreate a partition to resize it. Or use parted and find the magic flags so it doesn't actually do something unexpected. Chris -- Chris Hofstaedtler / Deduktiva GmbH (FN 418592 b, HG Wien) www.deduktiva.com / +43 1 353 1707 From proxmox-pve-user-list at licomonch.net Wed Jul 29 11:31:13 2020 From: proxmox-pve-user-list at licomonch.net (proxmox-pve-user-list at licomonch.net) Date: Wed, 29 Jul 2020 11:31:13 +0200 Subject: [PVE-User] HTTPS for download.proxmox.com In-Reply-To: <1bc4ea0a-2978-4214-d75a-e2c0f1ad0104@coppint.com> References: <0701d274-de00-84e2-e8e4-e62f0ac5ee3a@coppint.com> <2026549297.36.1512041535093@webmail.proxmox.com> <1bc4ea0a-2978-4214-d75a-e2c0f1ad0104@coppint.com> Message-ID: Hi Florent, > download.proxmox.com packages are signed with key which public part can > be downloaded on... download.proxmox.com, without https ! Well done. That's what public keys are made for .. make them public .. https doesn't change that .. it's used to transport secrets .. secret like the S in HTTPS If you want to use https for validation, you're on the wrong trip. You'd have to personally check the pub key person (you) to person (proxmox key admin) to be 100% sure about the correctness of the key .. If the key is not correct and you aren't already hacked by some evil minions you'll get a failure at package validation request .. or even earlier on 'apt update' The only real gain of package/pub-key distribution via https is a felt security gain. The real security gain is minimal and more theoretical. (If someone can compromise you with changed packages _and_ a wrong repo-key then you have greater problems then that ;) ) Greeting, Andreas F. > > On 30/11/2017 12:32, Dietmar Maurer wrote: >> This is why we have an enterprise repository! Please use the enterprise >> repository >> if you want SSL. >> >>> On November 30, 2017 at 12:22 PM Florent B wrote: >>> >>> >>> Up ! >>> >>> >>> On 30/05/2017 15:21, Florent B wrote: >>>> Hi PVE team, >>>> >>>> Would it be possible to include "download.proxmox.com" in SSL >>>> certificate for accessing downloads with HTTPS. >>>> >>>> Current certificate is only valid for proxmox.com & enterprise.proxmox.com. >>>> >>>> Thank you. >>>> >>>> Florent >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.gruenbichler at proxmox.com Wed Jul 29 13:37:55 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Wed, 29 Jul 2020 13:37:55 +0200 Subject: [PVE-User] HTTPS for download.proxmox.com In-Reply-To: <1bc4ea0a-2978-4214-d75a-e2c0f1ad0104@coppint.com> References: <0701d274-de00-84e2-e8e4-e62f0ac5ee3a@coppint.com> <2026549297.36.1512041535093@webmail.proxmox.com> <1bc4ea0a-2978-4214-d75a-e2c0f1ad0104@coppint.com> Message-ID: <1596017914.krw2pdzqws.astroid@nora.none> On July 29, 2020 10:50 am, Florent B wrote: > Hi, > > In 2020, you always consider HTTPS as a privilege for paid users > (enterprise repo) ? 
> > download.proxmox.com packages are signed with key which public part can > be downloaded on... download.proxmox.com, without https ! Well done. https://git.proxmox.com/?p=proxmox-ve.git;a=tree;f=debian the trust anchor for regular users is the ISO, which is both available for download via HTTPS, and the checksum is also published via HTTPS.. From devzero at web.de Fri Jul 31 08:46:47 2020 From: devzero at web.de (Roland) Date: Fri, 31 Jul 2020 08:46:47 +0200 Subject: [PVE-User] move virtual disk with snapshots Message-ID: hello, can someone explain, why it's not possible in proxmox to move a VMs virtual disk with snapshots to a different datastore/dir - not even when offline? we need to fix some zfs dataset/dir naming/structure and can't as we would loose all our snapshots when moving VMs around... regards roland
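At the time of Roland's question, moving a disk that has snapshots is generally refused because the snapshots are tied to the source storage and the block-level copy cannot take them along. When source and target are both ZFS, the dataset can be replicated with its whole snapshot history outside of PVE and then re-pointed; a sketch for a shut-down VM, assuming a PVE storage entry already exists for the target pool, with pool/dataset names and the VMID as placeholders.

# Replicate the zvol including all of its snapshots:
zfs snapshot rpool/data/vm-100-disk-0@move
zfs send -R rpool/data/vm-100-disk-0@move | zfs recv -u tank/data/vm-100-disk-0

# Point the VM at the new volume (edit the disk line in
# /etc/pve/qemu-server/100.conf to reference the target storage), verify,
# then remove the old dataset:
qm config 100
zfs destroy -r rpool/data/vm-100-disk-0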