[pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

DERUMIER, Alexandre alexandre.derumier at groupe-cyllene.com
Thu Jun 1 23:15:42 CEST 2023


Hi,
I found an interesting thread on the forum about kvm_pv_unhalt

https://forum.proxmox.com/threads/live-migration-between-intel-xeon-and-amd-epyc2-linux-guests.68663/


> Sounds good. Please also take a look at the default flag
> "kvm_pv_unhalt". As I mentioned, it would cause a kernel crash in
> paravirtualized unhalt code sooner or later in a migrated VM (started
> on Intel, migrated to AMD).
>
> Please note that according to our tests simply leaving the CPU type
> empty in the GUI (leading to the qemu command line argument of -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_unhalt,+kvm_pv_eoi,enforce), while
> seemingly working at first, will after some load and idle time in the
> VM result in a crash involving kvm_kick_cpu function somewhere inside
> of the paravirtualized halt/unhalt code. Linux kernels tested ranged
> from Debian's 4.9.210-1 to Ubuntu's 5.3.0-46 (and some in between).
> Therefore the Proxmox default seems to be unsafe and apparently the
> very minimum working command line probably would be args: -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_eoi.
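If that diagnosis holds, the workaround from the quote can be pinned per VM in its qemu-server config (a sketch; the VMID path is illustrative, and the flag list is the minimal set quoted above):

```
# /etc/pve/qemu-server/<vmid>.conf — override the generated -cpu line,
# keeping kvm_pv_eoi but dropping kvm_pv_unhalt
args: -cpu kvm64,+sep,+lahf_lm,+kvm_pv_eoi
```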




So, it sounds like it crashes if the CPU model is defined with a
vendor not matching the real hardware?

As it breaks live migration between Intel && AMD, maybe we shouldn't
add it to the new x86-64-v2 model?



There is a discussion on the qemu-devel mailing list about performance
with/without it:
https://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg01816.html

It seems to help when the guest has a lot of cores/NUMA nodes,
but it can slow down small VMs.
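For context on why the flag matters at all: kvm_pv_unhalt enables paravirtualized spinlocks, where a vCPU that fails to take a lock halts instead of busy-spinning, and the lock holder "kicks" it awake on release (the kvm_kick_cpu path from the crash report above). A toy Python sketch of that park/kick idea, purely illustrative and not the Linux/KVM implementation:

```python
import threading

class PvSpinlock:
    """Toy analogue of a paravirtualized ticket lock: waiters block
    ("halt") instead of spinning, and the releaser wakes ("kicks") the
    next waiter. Illustrative only -- not the kernel/KVM code."""
    def __init__(self):
        self._cv = threading.Condition()
        self._next_ticket = 0
        self._now_serving = 0

    def acquire(self):
        with self._cv:
            my_ticket = self._next_ticket
            self._next_ticket += 1
            while self._now_serving != my_ticket:
                self._cv.wait()        # "halt" until kicked

    def release(self):
        with self._cv:
            self._now_serving += 1
            self._cv.notify_all()      # "kick" the waiting vCPUs

counter = 0
lock = PvSpinlock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

This also hints at the trade-off above: avoiding busy-spins pays off when many vCPUs contend, but the extra halt/kick round trips add overhead for small guests.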


