[PVE-User] Proxmox with ceph storage VM performance strangeness

Rainer Krienke krienke at uni-koblenz.de
Tue Mar 17 17:07:47 CET 2020


Hello Alwin,

thank you for your reply.

The test VM's config is the one below. Besides the system disk it only has
a second disk that I added for my dd write test on the raw device (the dd
invocation is sketched right after the config):

agent: 1
bootdisk: scsi0
cores: 2
cpu: kvm64
ide2: none,media=cdrom
memory: 4096
name: pxaclient1
net0: virtio=52:24:28:e9:18:24,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: ceph:vm-100-disk-0,size=32G
scsi1: ceph:vm-100-disk-1,size=500G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=c57eb716-8188-485b-89cb-35d41dbf3fc1
sockets: 2
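
The dd test itself was nothing special; roughly like this (device name,
block size and flags are assumptions here, the 500G scsi1 disk appears as
/dev/sdb inside the guest):

# sequential write directly to the extra 500G disk (scsi1)
dd if=/dev/zero of=/dev/sdb bs=1M count=32768 oflag=direct status=progress
# sequential read back from the same disk
dd if=/dev/sdb of=/dev/null bs=1M count=32768 iflag=direct status=progress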


This is, as said, only a test machine. As I already wrote to Enko, I have
some server VMs where I could parallelize IO by using striped LVs; at the
moment these LVs are not striped (see the sketch below). But of course it
would also help if, in the long run, there were a way to lift the
single-disk IO bottleneck.
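
For illustration, the striping I have in mind would look roughly like this
(only a sketch, assuming four additional ceph-backed disks /dev/sd[b-e] in
the VM and a volume group name I made up):

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vgdata /dev/sdb /dev/sdc /dev/sdd /dev/sde
# stripe across all 4 PVs with a 64k stripe size, so IO is spread
# over four separate rbd images instead of just one
lvcreate -i 4 -I 64k -L 400G -n lvdata vgdata

Since each PV is its own rbd image, a single sequential stream in the VM
would then be spread over several rbd images in parallel instead of being
limited by one virtual disk.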

Thank you very much
Rainer

On 17.03.20 at 15:26, Alwin Antreich wrote:
> Hello Rainer,
> 
> On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote:
>> Hello,
>>
>> I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb
>> Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic
>> disks. The pool with rbd images for disk storage is erasure coded with a
>> 4+2 profile.
>>
>> I ran some performance tests since I noticed that there seems to be a
>> strange limit to the disk read/write rate of a single VM, even though the
>> physical machine hosting the VM, as well as the cluster as a whole, is
>> capable of doing much more.
>>
>> So what I did was to run a bonnie++ as well as a dd read/write test,
>> first in parallel on 10 VMs, then on 5 VMs, and finally on a single one.
>>
>> A value of "75" for "bo++rd" in the first line below means that each of
>> the 10 bonnie++ processes running in parallel on 10 different Proxmox VMs
>> reported, averaged over all results, 75 MBytes/sec for "block read". The
>> ceph values are the peaks measured by ceph itself during the test run
>> (all rd/wr values in MBytes/sec):
>>
>> VM-count:  bo++rd: bo++wr: ceph(rd/wr):  dd-rd:  dd-wr:  ceph(rd/wr):
>> 10           75      42      540/485       55     58      698/711
>>  5           90      62      310/338       47     80      248/421
>>  1          108     114      111/120      130    145      337/165
>>
>>

-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html,     Fax: +49261287 1001312


