[PVE-User] proxmox-restore - performance issues
Gregor Burck
gregor at aeppelbroe.de
Fri Oct 1 08:52:13 CEST 2021
Hi,
thank you for the reply. I made a lot of different tests and setups, but
this is the setup I want to use:
Original setup:
HP DL380 Gen9 with
E5-2640 v3 @ 2.60GHz
256 GB RAM
2x SSDs for the host OS
For a ZFS RAID 10:
2x 1 TB Samsung NVMe PM983 as special devices
12x 8 TB HP SAS HDDs
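For reference, a pool with this layout can be created roughly like this (a sketch, not necessarily the exact command used; device names match the zpool status below, ashift=12 is an assumption):

  zpool create -o ashift=12 ZFSPOOL \
    mirror sdc sdd mirror sde sdf mirror sdg sdh \
    mirror sdi sdj mirror sdk sdl mirror sdm sdn \
    special mirror nvme0n1 nvme1n1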
root@ph-pbs:~# zpool status
  pool: ZFSPOOL
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        ZFSPOOL       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            sde       ONLINE       0     0     0
            sdf       ONLINE       0     0     0
          mirror-2    ONLINE       0     0     0
            sdg       ONLINE       0     0     0
            sdh       ONLINE       0     0     0
          mirror-3    ONLINE       0     0     0
            sdi       ONLINE       0     0     0
            sdj       ONLINE       0     0     0
          mirror-4    ONLINE       0     0     0
            sdk       ONLINE       0     0     0
            sdl       ONLINE       0     0     0
          mirror-5    ONLINE       0     0     0
            sdm       ONLINE       0     0     0
            sdn       ONLINE       0     0     0
        special
          mirror-6    ONLINE       0     0     0
            nvme0n1   ONLINE       0     0     0
            nvme1n1   ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:02:40 with 0 errors on Sun Aug 8 00:26:43 2021
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sda3      ONLINE       0     0     0
            sdb3      ONLINE       0     0     0

errors: No known data errors
The VMSTORE and the BACKUPSTORE are on the ZFS pool as datasets:
root@ph-pbs:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
ZFSPOOL                 10.1T  32.1T    96K  /ZFSPOOL
ZFSPOOL/BACKUPSTORE001  5.63T  32.1T  5.63T  /ZFSPOOL/BACKUPSTORE001
ZFSPOOL/VMSTORE001      4.52T  32.1T  4.52T  /ZFSPOOL/VMSTORE001
rpool                   27.3G  80.2G    96K  /rpool
rpool/ROOT              27.3G  80.2G    96K  /rpool/ROOT
rpool/ROOT/pbs-1        27.3G  80.2G  27.3G  /
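The two datasets themselves are plain ZFS datasets on that pool, created along these lines (a sketch; no tuning properties are shown here):

  zfs create ZFSPOOL/BACKUPSTORE001
  zfs create ZFSPOOL/VMSTORE001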
The VM I tested with is our Exchange server: raw image size 500 GB, with
about 400 GB of net content.
First test with one restore job (task log from the PVE 7.0-11 web UI):
new volume ID is 'VMSTORE:vm-101-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero
connecting to repository 'root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP'
open block backend for target '/dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0'
starting to restore snapshot 'vm/121/2021-07-23T19:00:03Z'
download and verify backup index
progress 1% (read 5368709120 bytes, zeroes = 2% (125829120 bytes), duration 86 sec)
progress 2% (read 10737418240 bytes, zeroes = 1% (159383552 bytes), duration 181 sec)
progress 3% (read 16106127360 bytes, zeroes = 0% (159383552 bytes), duration 270 sec)
...
progress 98% (read 526133493760 bytes, zeroes = 0% (3628072960 bytes), duration 9492 sec)
progress 99% (read 531502202880 bytes, zeroes = 0% (3628072960 bytes), duration 9583 sec)
progress 100% (read 536870912000 bytes, zeroes = 0% (3628072960 bytes), duration 9676 sec)
restore image complete (bytes=536870912000, duration=9676.97s, speed=52.91MB/s)
rescan volumes...
TASK OK
When I look at iotop I see about the same rate.
But when I start multiple restore jobs in parallel, I see that each
single job still does 40-50 MB/s of IO, while the total IO is a multiple
of that rate: in iotop I see rates up to 200-250 MB/s.
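Roughly, such a parallel test looks like this (a sketch only; the second VM ID, snapshot and target zvol are placeholders, the flags are the same as in the task log above):

  /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP \
    vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx \
    /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero &
  /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP \
    vm/122/2021-07-23T19:00:05Z drive-virtio0.img.fidx \
    /dev/zvol/ZFSPOOLVMSTORE/vm-102-disk-0 --verbose --format raw --skip-zero &
  wait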
So I guess it isn't the store. In a test with a setup where I used the
NVMes as both source and target, I could reach a single restore rate of
about 70 MB/s.
I have now tested another CPU in this machine, because on other test
machines with other CPUs (AMD Ryzen and others) I get a higher rate.
Unfortunately, the rate on the current machine doesn't rise with the other CPU.
Now I am wondering whether there is any chance to improve the restore rate.
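For what it's worth, one way to check whether a single restore stream is limited by per-core speed (chunk hashing/decompression) rather than by the disks would be the built-in benchmark, run against the same repository (a sketch; repository name taken from the restore log above):

  proxmox-backup-client benchmark --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP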
Bye
Gregor