[PVE-User] high cpu load on 100mbits/sec download with virtio nic

Alwin Antreich aa at ipnerd.net
Sat Jun 9 19:48:06 CEST 2018


On Fri, Jun 08, 2018 at 07:39:17AM +0000, Maxime AUGER wrote:
> Hello,
>
> Let me clarify my statement.
> GUEST CPU load is acceptable (25% of a single CPU)
> It is the cumulative load of the kvm process and the vhost thread that is high, on the HOST side
> kvm-thread-1=30%
> kvm-thread-2=30%
> vhost(net)=10%
> 70% CPU in total, with no disk I/O (downloading to -O /dev/null)
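>
> (A rough sketch of how that per-thread split can be read on the host side; the VMID 100 below is just a placeholder, assuming the VM's pidfile under /var/run/qemu-server:)
>
>   pid=$(cat /var/run/qemu-server/100.pid)        # kvm process of VM 100
>   ps -Lo tid,comm,pcpu -p "$pid"                 # per-thread CPU of that kvm process
>   ps -eo pid,comm,pcpu | grep "vhost-$pid"       # matching vhost-net kernel thread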
>
> It is 10x the load observed under the same conditions on an old VMware ESXi host (same guest system and wget process).
> I think it is a KVM issue, but I'm curious about Proxmox's position on this performance level.
>
>
> -----Original Message-----
> From: pve-user [mailto:pve-user-bounces at pve.proxmox.com] On Behalf Of Josh Knight
> Sent: Thursday, June 7, 2018 20:47
> To: PVE User List <pve-user at pve.proxmox.com>
> Objet : Re: [PVE-User] high cpu load on 100mbits/sec download with virtio nic
>
> I'm not convinced this is a Proxmox issue, or even an issue to begin with.
>
> I'm running Proxmox 5.1-49. In my Linux 4.1 guest, when I run wget -O /dev/null <https to ~1.2GB iso>, I'm also seeing ~30% according to top.
>
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 12788 root      20   0   65512   7616   5988 S  31.3  0.1   0:03.00 wget
>
>
> Even on a physical box running Ubuntu, I'm getting around 20-30% or more.
>
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 36070 root      20   0   32888   4028   3576 S  37.6  0.0   0:03.15 wget
>
>
> This could be an issue/quirk with the way top calculates CPU usage, or it could simply be a process using the available CPU as normal. I couldn't reproduce the high load using ping -f from a remote host to my VM, and iotop confirmed that -O /dev/null wasn't somehow writing to disk. I was able to lower the CPU usage by running wget with --limit-rate=.
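>
> (Something along these lines, with the rate and URL as placeholders:)
>
>   wget -O /dev/null --limit-rate=2m https://example.com/large.iso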
>
> Related: I would not recommend macvtap if you're running routers. At least on an Ubuntu 16.04 host, under a load of ~1 Gbit/s we were seeing anywhere from 0.01% to 0.16% packet loss before traffic reached the guest's virtio interface. We switched to a Linux bridge and then finally to Open vSwitch with the default switching config (no custom OpenFlow rules). A minimal sketch of such a config follows below.
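>
> (A minimal sketch of a default OVS bridge in /etc/network/interfaces, Proxmox-style; interface name and addresses are placeholders:)
>
>   auto vmbr0
>   iface vmbr0 inet static
>           address 192.0.2.10
>           netmask 255.255.255.0
>           ovs_type OVSBridge
>           ovs_ports eno1
>
>   allow-vmbr0 eno1
>   iface eno1 inet manual
>           ovs_bridge vmbr0
>           ovs_type OVSPort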
>
>
> Josh Knight
>
I guess this could well be related to the Meltdown/Spectre mitigations.
At least that would fit with the "old" ESXi showing a different
performance hit.

Try setting the CPU type to host, or enable the PCID/spec-ctrl CPU flags
if applicable.
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
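
A rough sketch of what that can look like (VMID 100 is a placeholder, and
the flags syntax needs a recent enough qemu-server):

  # on the host: see which mitigations the kernel reports, if the sysfs entries exist
  grep . /sys/devices/system/cpu/vulnerabilities/*

  # pass the host CPU model through to the guest
  qm set 100 --cpu host

  # or keep the default model and add the flags explicitly
  qm set 100 --cpu "kvm64,flags=+pcid;+spec-ctrl"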

--
Cheers,
Alwin


