[PVE-User] Proxmox with ceph storage VM performance strangeness

Alexandre DERUMIER aderumier at odiso.com
Tue Mar 17 19:32:52 CET 2020


>>What rates do you find on your proxmox/ceph cluster for single VMs?

With replication x3 and 4k block random read/write at a big queue depth, I'm around 70000 IOPS read and 40000 IOPS write.

(per VM disk, if iothread is used; the limitation is the CPU usage of one thread/core per disk)
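
A minimal sketch of how iothread can be enabled on a VM disk, assuming VM ID 100 and an RBD-backed storage named "ceph-rbd" (IDs, storage and volume names here are placeholders, adjust to your setup):

  # one iothread per disk requires the virtio-scsi-single controller
  qm set 100 --scsihw virtio-scsi-single
  qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,iothread=1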


With queue depth = 1, I'm around 4000-5000 IOPS (because of network latency + CPU latency).

This is with 3 GHz Intel CPUs on both the client and the server.
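
For anyone who wants to reproduce numbers like these, a benchmark along these lines can be run with fio inside the VM. This is only a sketch; the test device /dev/sdb, runtime and queue depths are assumptions, not the exact parameters behind the figures above:

  # 4k random read at a large queue depth (throughput-bound case)
  fio --name=randread --filename=/dev/sdb --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=64 --numjobs=1 --runtime=60 --time_based

  # 4k random write at queue depth 1 (latency-bound case)
  fio --name=randwrite --filename=/dev/sdb --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based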



----- Original Message -----
From: "Rainer Krienke" <krienke at uni-koblenz.de>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Tuesday, March 17, 2020 14:04:22
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness

Hello, 

I run a PVE 6.1-7 cluster with 5 nodes that is attached (via a 10Gb 
network) to a Ceph Nautilus cluster with 9 Ceph nodes and 144 magnetic 
disks. The pool holding the RBD images for VM disk storage is erasure 
coded with a 4+2 profile. 
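
For readers who want to reproduce the setup: a 4+2 erasure coded pool usable for RBD is typically created roughly as follows. The profile/pool names and PG counts below are placeholders, not necessarily the values used here; note that the EC data pool needs overwrites enabled, while RBD image metadata lives in a replicated pool:

  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ec-data 1024 1024 erasure ec-4-2
  ceph osd pool set ec-data allow_ec_overwrites true
  # image metadata goes to a replicated pool, data to the EC pool
  rbd create --size 100G --data-pool ec-data rbd/vm-test-disk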

I ran some performance tests since I noticed that there seems to be a 
strange limit on the disk read/write rate of a single VM, even though the 
physical machine hosting the VM, as well as the cluster as a whole, is 
capable of doing much more. 

So what I did was run a bonnie++ as well as a dd read/write test, 
first in parallel on 10 VMs, then on 5 VMs, and finally on a single one. 

A value of "75" for "bo++rd" in the first line below means that each of 
the 10 bonnie++ processes running in parallel on 10 different Proxmox VMs 
reported, averaged over all results, a value of 75 MBytes/sec for "block 
read". The ceph values are the peaks measured by Ceph itself during the 
test run (all rd/wr values in MBytes/sec): 

VM-count:  bo++rd:  bo++wr:  ceph(rd/wr):  dd-rd:  dd-wr:  ceph(rd/wr):
      10       75       42       540/485      55      58       698/711
       5       90       62       310/338      47      80       248/421
       1      108      114       111/120     130     145       337/165


What I find a little strange is that with many VMs doing IO in 
parallel I reach a total write rate of about 485-711 MBytes/sec, whereas 
with a single VM the maximum is 120-165 MBytes/sec. Since the whole 
network is based on 10Gb infrastructure, and an iperf test between a VM 
and a Ceph node reported nearly 10Gb/s, I would expect a higher rate for 
the single VM. Even if I run a test with 5 VMs on *one* physical host 
(values not shown above), the results are not far behind those for 5 VMs 
on 5 hosts shown above. So the single host does not seem to be the 
limiting factor; rather, the VM itself is limiting IO. 
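
A sketch of that kind of network check, plus a rados bench run directly against the pool from a PVE node to take the VM layer out of the picture entirely. Host and pool names are placeholders, and iperf3 is assumed here although plain iperf works as well:

  # on a Ceph node:
  iperf3 -s
  # on a VM or PVE node:
  iperf3 -c ceph-node1 -t 30

  # raw pool throughput from a PVE node, bypassing the VM layer:
  rados -p <pool> bench 60 write -b 4M -t 16 --no-cleanup
  rados -p <pool> bench 60 seq -t 16
  rados -p <pool> cleanup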

What rates do you find on your Proxmox/Ceph cluster for single VMs? 
Does anyone have an explanation for this rather big difference, or 
perhaps an idea of what to try in order to get higher IO rates from a 
single VM? 

Thank you very much in advance 
Rainer 



--------------------------------------------- 
Here are the more detailed test results for anyone interested: 

Using bonnie++: 
10 VMs (two on each of the 5 hosts), each with 4GB RAM, BTRFS; cd /root; 
bonnie++ -u root 
Average for each VM: 
block write: ~42MByte/sec, block read: ~75MByte/sec 
ceph: total peak: 485MByte/sec write, 540MByte/sec read 

5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u 
root 
Average for each VM: 
block write: ~62MByte/sec, block read: ~90MByte/sec 
ceph: total peak: 338MByte/sec write, 310MByte/sec read 

1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root 
Average for VM: 
block write: ~114 MByte/sec, block read: ~108MByte/sec 
ceph: total peak: 120 MByte/sec write, 111MByte/sec read 


Using dd: 
10 VMs (two on each of the 5 hosts), each with 4GB RAM, writing to a 
Ceph-based VM disk "sdb" (rbd) 
write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress 
Average for each VM: 
bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec 
bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec 
ceph: total peak: 711MByte/sec write, 698 MByte/sec read 

5 VMs (one on each of the 5 hosts), each with 4GB RAM, writing to a 
Ceph-based VM disk "sdb" (rbd) 
write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress 
Average for each VM: 
bs=4096k count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec 
ceph: total peak: 421MByte/sec write, 248 MByte/sec read 

1 VM: 4GB RAM, writing to a Ceph-based VM disk "sdb" (rbd device) 
write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress 
Average for the VM: 
bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec 
ceph: total peak: 165 MByte/sec write, 337 MByte/sec read 
-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 
1001312 
_______________________________________________ 
pve-user mailing list 
pve-user at pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 



