[PVE-User] Proxmox with ceph storage VM performance strangeness
Alwin Antreich
a.antreich at proxmox.com
Tue Mar 17 19:13:04 CET 2020
On Tue, Mar 17, 2020 at 05:07:47PM +0100, Rainer Krienke wrote:
> Hello Alwin,
>
> thank you for your reply.
>
> The test VM's config is this one. It only has the system disk as well as
> a disk I added for my test, writing to the device with dd:
>
> agent: 1
> bootdisk: scsi0
> cores: 2
> cpu: kvm64
If possible, set host as the CPU type. This exposes all extensions of the
host CPU to the VM, but you will need the same CPU model on all nodes.
Otherwise, try to find a model with a common set of features.
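As a minimal sketch (assuming the VM ID is 100, matching the disk names
below), the CPU type can be changed on the command line with:

  qm set 100 --cpu host

The new CPU type takes effect after the VM has been fully stopped and
started again.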
> ide2: none,media=cdrom
> memory: 4096
With more memory for the VM, you could also tune the caching inside the
guest.
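If you do increase the memory, one hedged example of guest-side tuning
(the values are only an illustration, not a recommendation) would be the
writeback sysctls inside the guest:

  # inside the guest; controls when dirty pages are flushed to disk
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=20

Persist them in /etc/sysctl.conf (or a file under /etc/sysctl.d/) if they
turn out to help your workload.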
> name: pxaclient1
> net0: virtio=52:24:28:e9:18:24,bridge=vmbr1,firewall=1
> numa: 0
> ostype: l26
> scsi0: ceph:vm-100-disk-0,size=32G
> scsi1: ceph:vm-100-disk-1,size=500G
Use cache=writeback; the QEMU caching modes translate to the Ceph cache
settings. With writeback, Ceph activates librbd caching (default 25 MB).
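For the test disk above, a sketch of that change (again assuming VM ID 100)
would be:

  qm set 100 --scsi1 ceph:vm-100-disk-1,cache=writeback,size=500G

If the default librbd cache is too small for your workload, it can be
raised client-side in ceph.conf, for example (the value is only an
illustration):

  [client]
  rbd cache size = 67108864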
> scsihw: virtio-scsi-pci
> serial0: socket
> smbios1: uuid=c57eb716-8188-485b-89cb-35d41dbf3fc1
> sockets: 2
If it is a NUMA system, it is best to also activate the NUMA flag, as KVM
then tries to run the two threads (cores) on the same NUMA node.
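Assuming again VM ID 100, this is simply:

  qm set 100 --numa 1

so that guest memory and vCPUs can be allocated with the host NUMA
topology in mind.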
>
>
> This is, as said, only a test machine. As I already wrote to Enko, I have
> some server VMs where I could parallelize IO by using striped LVs; at the
> moment these LVs are not striped. But of course it would also help if,
> in the long run, there was a way to lift the "one disk" IO bottleneck.
Yes, I have seen that. But this will make backups and managing the disks
harder.
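A minimal sketch of such a striped LV inside a guest (device names, VG/LV
names, and sizes are hypothetical, assuming four separate RBD-backed
virtual disks attached to the VM) could look like:

  # inside the guest
  pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
  vgcreate vgdata /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # stripe across all four PVs with a 64k stripe size
  lvcreate --stripes 4 --stripesize 64k -L 400G -n lvdata vgdata

Each stripe then maps to a different RBD image, which is what spreads the
single-disk IO over more parallel requests, but as said it also makes
backups and disk management harder.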
--
Cheers,
Alwin