[pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

Fiona Ebner f.ebner at proxmox.com
Fri Jun 2 09:28:39 CEST 2023


On 01.06.23 at 23:15, DERUMIER, Alexandre wrote:
> Hi,
> I found an interesting thread on the forum about kvm_pv_unhalt
> 
> https://forum.proxmox.com/threads/live-migration-between-intel-xeon-and-amd-epyc2-linux-guests.68663/
> 
> 
> Sounds good. Please also take a look at the default flag
> "kvm_pv_unhalt". As I mentioned, it would cause a kernel crash in
> paravirtualized unhalt code sooner or later in a migrated VM (started
> on Intel, migrated to AMD).
> 
> Please note that according to our tests simply leaving the CPU type
> empty in the GUI (leading to the qemu command line argument of -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_unhalt,+kvm_pv_eoi,enforce), while
> seemingly working at first, will after some load and idle time in the
> VM result in a crash involving kvm_kick_cpu function somewhere inside
> of the paravirtualized halt/unhalt code. Linux kernels tested ranged
> from Debian's 4.9.210-1 to Ubuntu's 5.3.0-46 (and some in between).
> Therefore the Proxmox default seems to be unsafe and apparently the
> very minimum working command line probably would be args: -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_eoi.
> 
> 
> 
> 
> So, it sounds like it crashes if the CPU is defined with a vendor not
> matching the real hardware?
> 
> As it breaks migration between Intel && AMD, maybe we shouldn't add
> it to the new x86-64-vx model?
> 
> 
> 
> A discussion on the qemu-devel mailing list talks about the performance
> with/without it:
> https://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg01816.html
> 
> It seems to help when you have a lot of cores/NUMA nodes in the guest,
> but can slow down small VMs.
> 

Note that migration between CPUs of different vendors is not a supported
use case (it will always depend on the specific models, kernel versions,
etc.), so we can only justify leaving the flag out of the new default
model if doing so doesn't make life worse for everybody else.

And I'd be careful about jumping to general conclusions from just one
forum post.

It seems like you were the one who added the flag ;)

https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=117a041466b3af8368506ae3ab7b8d26fc07d9b7

and the LWN-archived mail linked in the commit message says

> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs).  This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning.  (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).

"catastrophic performance collapses" doesn't sound very promising :/

But if we find that
kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3
causes issues (even without cross-vendor live migration) with the
+kvm_pv_unhalt flag, but not without it, that would be a much more
convincing reason against adding the flag to the new default.
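
For reference, such an A/B test could use the args override already
mentioned in the forum post, on a test VM config (the VMID 100 below is
just an example), switching between the two lines between runs:

    /etc/pve/qemu-server/100.conf

    with the flag:
      args: -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3

    without the flag:
      args: -cpu kvm64,enforce,+kvm_pv_eoi,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3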




