[PVE-User] Proxmox with ceph storage VM performance strangeness

Alwin Antreich a.antreich at proxmox.com
Wed Apr 15 09:24:48 CEST 2020


On Tue, Apr 14, 2020 at 08:15:15PM +0200, Rainer Krienke wrote:
> On 14.04.20 at 18:09, Alwin Antreich wrote:
> 
> >>
> >> In a VM I also tried to read its own striped LV device: dd
> >> if=/dev/vg/testlv  of=/dev/null bs=1024k status=progress (after clearing
> >> the VM's cache). /dev/vg/testlv is a striped LV (on 4 disks) with xfs
> >> on it, on which I had tested the speed using bonnie++ before.
> >> This dd also did not go beyond about 100MB/sec, whereas the rados bench
> >> promises much more.
> > Do you have a VM without striped volumes? I suppose there will be two
> > requests, one for each half of the data. That could slow down the read
> > as well.
> 
> Yes, the logical volume is striped across 4 physical volumes (RBDs). But
> since exactly this setup helped to boost writes (more parallelism), it
> should do the same for reads, since blocks can be read from more separate
> rbd devices and thus more disks in general.
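
Just for reference, such a striped LV is usually created along these lines
(the stripe size and LV size below are only placeholders, the VG/LV names
are taken from your dd example):

  # stripe across the 4 RBD-backed PVs with a 64k stripe size
  lvcreate -i 4 -I 64k -L 100G -n testlv vg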
> 
> I also tested a VM with just a single rbd used for the VM's disk, and
> there the effect is quite the same.
> 
> > 
> > And you can disable the cache to verify that cache misses don't impact
> > the performance.
> 
> I tried and disabled the writeback cache, but the effect was only minimal.
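
Just for reference, the disk cache mode can also be set from the PVE side,
e.g. (the VM ID, storage and volume name below are placeholders):

  qm set 100 --scsi0 rbd_pool:vm-100-disk-0,cache=none
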
It seems that at this point the optimizations need to be done inside the
VM (e.g. readahead). I think the requested data is not in the cache and
the reads are too small to be served within one operation.
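
For example, the readahead inside the VM could be checked and raised for a
test (the device name below is only a placeholder):

  # current readahead, in 512-byte sectors
  blockdev --getra /dev/sdb
  # raise it to 8 MiB (16384 * 512 bytes)
  blockdev --setra 16384 /dev/sdb
  # or via sysfs, value in KiB
  echo 8192 > /sys/block/sdb/queue/read_ahead_kb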

--
Cheers,
Alwin



