[PVE-User] Proxmox with ceph storage VM performance strangeness
krienke at uni-koblenz.de
Tue Apr 14 17:21:44 CEST 2020
Am 14.04.20 um 16:42 schrieb Alwin Antreich:
>> According to these numbers the relation from write and read performance
>> should be the other way round: writes should be slower than reads, but
>> on a VM its exactly the other way round?
> Ceph does reads in parallel, while writes are done to the primary OSD by
> the client. And that OSD is responsible for distributing the other
Ah yes, right. The primary OSD has to wait until all OSDs in the PG have
confirmed that the data has been written. Reads, as you said, are
parallel, so I would expect reading to be faster than writing, but in a
Proxmox VM on ceph rbd storage it is exactly the other way round for me.
On the ceph level, however, reads *are* faster: a rados bench run
directly on a pxa host (no VM) shows the behaviour I would also expect
for reads and writes inside a VM.
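To make the read/write asymmetry on the ceph level concrete, here is a toy latency model (not Ceph code; the per-OSD service times are made-up numbers). A replicated write completes only after the primary has forwarded the data and the *slowest* replica has acked, while a read of one object is served by a single OSD:

```python
import random

random.seed(42)

def osd_latency():
    # Hypothetical per-OSD service time in milliseconds (illustrative only).
    return random.uniform(2.0, 8.0)

def write_latency(replicas=3):
    # The client writes to the primary OSD; the primary forwards the data
    # to the remaining replicas and must wait for ALL of them to ack,
    # so the write finishes at the speed of the slowest replica.
    primary = osd_latency()
    others = [osd_latency() for _ in range(replicas - 1)]
    return primary + max(others)

def read_latency():
    # A read of a single object is served by one (primary) OSD.
    return osd_latency()

writes = [write_latency() for _ in range(10_000)]
reads = [read_latency() for _ in range(10_000)]
print(f"avg write: {sum(writes) / len(writes):.1f} ms, "
      f"avg read: {sum(reads) / len(reads):.1f} ms")
```

In this model writes are roughly twice as slow as reads, which matches the rados bench behaviour on the host; it does not explain the inverted numbers inside the VM.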
>> Any idea why writes in a VM are nevertheless ~3 times faster than
>> reads, and what I could try to speed up reading?
> What is the byte size of bonnie++? If it uses 4 KB and data isn't in the
> cache, whole objects need to be requested from the cluster.
I did not find any information about the block sizes bonnie++ uses. The
file that bonnie++ writes and later reads back is, however, by default
at least twice the size of the machine's RAM, so it should not fit in
the cache.
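If Alwin's suspicion is right, the cost of small uncached reads is easy to quantify. Assuming the rbd default object size of 4 MiB and a hypothetical 4 KiB read size (the actual bonnie++ block size is unknown), fetching a whole object to serve one small read gives:

```python
# Toy read-amplification estimate: if a small random read misses the
# cache and the whole RADOS object has to be requested from the cluster,
# only a tiny fraction of the transferred data is useful.
object_size = 4 * 1024 * 1024   # rbd default object size (4 MiB)
read_size = 4 * 1024            # assumed small read (4 KiB)

amplification = object_size // read_size
print(f"worst-case read amplification: {amplification}x")  # -> 1024x
```

Even partial-object reads would still pay per-object round-trip latency, which sequential large writes avoid.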
In the VM I also tried reading its striped LV device directly (after
clearing the VM's cache):

dd if=/dev/vg/testlv of=/dev/null bs=1024k status=progress

/dev/vg/testlv is a striped LV (across 4 disks) carrying the xfs
filesystem on which I ran the bonnie++ tests before. This dd also did
not get beyond about 100 MB/s, whereas rados bench promises much more.
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287