[pve-devel] lower iops with PVE kernel than vanilla 3.5
Martin Maurer
martin at proxmox.com
Sun Jul 22 09:47:12 CEST 2012
> -----Original Message-----
> From: Stefan Priebe [mailto:s.priebe at profihost.ag]
> Sent: Saturday, 21 July 2012 22:39
> To: Martin Maurer
> Cc: pve-devel at pve.proxmox.com
> Subject: Re: [pve-devel] lower iops with PVE kernel than vanilla 3.5
>
> On 21.07.2012 22:05, Martin Maurer wrote:
> >> -----Original Message-----
> >> From: pve-devel-bounces at pve.proxmox.com [mailto:pve-devel-
> >> bounces at pve.proxmox.com] On Behalf Of Stefan Priebe
> >> Sent: Saturday, 21 July 2012 21:24
> >> To: pve-devel at pve.proxmox.com
> >> Subject: [pve-devel] lower iops with PVE kernel than vanilla 3.5
> >>
> >> Hello list,
> >>
> >> I'm still trying to tune my Proxmox environment.
> >>
> >> On my KVM host I get a constant 100,000 4k random iops with both
> >> vanilla 3.5 and the latest PVE kernel - so no difference.
> >>
> >> But inside a VM using an LVM block device with virtio:
> >>
> >> PVE Kernel:
> >> 1 VM: 15k iops
> >> 2 VMs: 2x10k iops
> >>
> >> 3.5 vanilla Kernel:
> >> 1 VM: 60k iops
> >> 2 VMs: 2x30k iops
> >>
> >> Anything we can do about that?
> >
> > Give all details (hardware, software, benchmark command used) and I will
> > try to reproduce it in the lab.
>
> thanks
>
> Storage system:
> RAID 0 of 16 Intel 520 series SSDs
> exported via LIO iSCSI (a setup sketch follows below)
> 10GbE network
> Debian Squeeze
> Kernel 3.5-rc7
> Single Xeon E5 1620
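>
> (For reference, one way such an export can be created with targetcli;
> this is a sketch, not necessarily the exact configuration used here -
> the backing device, IQN and portal IP are placeholders:)
>
>   # back the iSCSI LUN with the md RAID 0 device
>   targetcli /backstores/block create name=ssdraid dev=/dev/md0
>   # create the target and expose the LUN on a portal
>   targetcli /iscsi create iqn.2012-07.example:ssdraid
>   targetcli /iscsi/iqn.2012-07.example:ssdraid/tpg1/luns create /backstores/block/ssdraid
>   targetcli /iscsi/iqn.2012-07.example:ssdraid/tpg1/portals create 192.168.0.1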
>
> Proxmox system:
> Proxmox 2.1 from ISO with latest git packages
> open-iscsi 2.0-873 with 4 sessions and multipath (sketch below)
> Dual Xeon E5-2640
> LVM on top of iSCSI block device
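>
> (A sketch of how such sessions are typically brought up with open-iscsi;
> the target, portal and the nr_sessions setting are assumptions, not
> necessarily the exact commands used here:)
>
>   # discover the target and allow 4 sessions per node
>   iscsiadm -m discovery -t sendtargets -p 192.168.0.1
>   iscsiadm -m node -T iqn.2012-07.example:ssdraid \
>       -o update -n node.session.nr_sessions -v 4
>   iscsiadm -m node -T iqn.2012-07.example:ssdraid --login
>   # multipath then groups the resulting block devices into one device
>   multipath -ll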
>
> Disks are set to the noop scheduler.
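>
> (For reference, how the scheduler is typically switched via sysfs; the
> device name is a placeholder:)
>
>   # set the I/O scheduler for the iSCSI disk to noop
>   echo noop > /sys/block/sdb/queue/scheduler
>   # verify - the active scheduler is shown in brackets
>   cat /sys/block/sdb/queue/scheduler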
>
> Benchmark command:
> fio --filename=$DISK --direct=1 --rw=randwrite --bs=4k --size=200G
> --numjobs=50 --runtime=90 --group_reporting --name=file1
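>
> (The same command is run on the host against the LVM volume and inside
> the guest against the virtio disk; the device paths below are only
> placeholders:)
>
>   # on the Proxmox host, against the LVM logical volume
>   fio --filename=/dev/iscsi/vm-101-disk-1 --direct=1 --rw=randwrite \
>       --bs=4k --size=200G --numjobs=50 --runtime=90 \
>       --group_reporting --name=file1
>
>   # inside the VM, against the virtio block device
>   fio --filename=/dev/vdb --direct=1 --rw=randwrite --bs=4k --size=200G \
>       --numjobs=50 --runtime=90 --group_reporting --name=file1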
>
> Anything else you need?
Yes, I would like to have such hardware (storage system and 10 Gbit network) in our test lab. Currently our test lab can't compete with yours.
Martin