[PVE-User] Proxmox with ceph storage VM performance strangeness
a.antreich at proxmox.com
Tue Apr 14 18:09:00 CEST 2020
On Tue, Apr 14, 2020 at 05:21:44PM +0200, Rainer Krienke wrote:
> On 14.04.20 at 16:42, Alwin Antreich wrote:
> >> According to these numbers the relationship between write and read
> >> performance should be the other way round: writes should be slower
> >> than reads, but on a VM it's exactly the opposite?
> > Ceph does reads in parallel, while writes are done to the primary OSD by
> > the client. And that OSD is responsible for distributing the other
> > copies.
> Ah yes, right. The primary OSD has to wait until all the OSDs in the PG
> have confirmed that the data has been written to each of them. Reads, as
> you said, are parallel, so I would expect reading to be faster than
> writing, but for me it is *not* in a Proxmox VM with Ceph RBD storage.
> However, reads are faster at the Ceph level, in a rados bench run
> directly on a pxa host (no VM), which is what I would also expect for
> reads/writes inside a VM.
> >> Any idea why writes on a VM are nevertheless ~3 times faster than
> >> reads, and what I could try to speed up reading?
> > What block size does bonnie++ use? If it uses 4 KB and the data isn't
> > in the cache, whole objects need to be requested from the cluster.
> I did not find information about the block sizes used. However, the file
> that is written and later read back by bonnie++ is by default at least
> twice the size of your RAM.
According to the man page, the chunk size is 8192 bytes by default.
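If each uncached 8 KiB chunk really forces a whole RBD object to be fetched, the amplification would be substantial. A back-of-the-envelope sketch, assuming the default 4 MiB RBD object size (verify the actual value with `rbd info` on your image):

```shell
# Rough read-amplification estimate. Assumes the default 4 MiB RBD
# object size; check the actual value with `rbd info <image>`.
chunk=8192                    # bonnie++ default chunk size (bytes)
object=$((4 * 1024 * 1024))   # assumed RBD object size (bytes)
echo "amplification: $((object / chunk))x"
```

This prints `amplification: 512x`, i.e. in the worst case up to 512 times the requested data moves through the cluster per cache miss.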
> In a VM I also tried to read its own striped LV device: dd
> if=/dev/vg/testlv of=/dev/null bs=1024k status=progress (after clearing
> the VM's cache). /dev/vg/testlv is a striped LV (across 4 disks) with
> XFS on it, on which I had tested the speed using bonnie++ before.
> This dd also did not go beyond about 100 MB/s, whereas the rados bench
> promises much more.
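Note that rados bench uses 4 MiB operations by default, so its numbers are not directly comparable to a dd with 1 MiB blocks. A sketch of a more like-for-like run (the pool name `test` is a placeholder for one of your pools):

```shell
# Benchmark with a block size matching the in-VM dd (1 MiB).
# --no-cleanup keeps the written objects so the seq read has data.
rados bench -p test 60 write -b 1048576 -t 16 --no-cleanup
rados bench -p test 60 seq -t 16
rados -p test cleanup
```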
Do you have a VM without striped volumes? I suppose there will be two
requests, one for each half of the data. That could slow down the read as
well. And you can disable the cache to verify that cache misses don't
impact the results.
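One way to take the guest page cache out of the picture is to drop it and then read with O_DIRECT; a sketch, run as root inside the VM (the LV path is the one from your earlier test):

```shell
# Flush dirty pages and drop the page cache, then read bypassing it.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/vg/testlv of=/dev/null bs=4M count=256 iflag=direct status=progress
```

If the O_DIRECT read stays around 100 MB/s while rados bench on the host is much faster, the bottleneck is more likely in the VM's I/O path than in the cluster itself.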