[pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel

Eneko Lacunza elacunza at binovo.es
Mon Jun 5 17:20:06 CEST 2023


I'm sorry I could only test on Ryzen 1700, 2600X and 5950X - our 3700X 
is offline, pending some upgrades. I hope it will be back again soon.

Tested installation of the Debian 11.1.0 ISO with the GUI installer, up 
to the first boot of the installed system and its GUI login:

On 1/6/23 at 18:00, Fiona Ebner wrote:
>> qm set <ID> -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'

This was good for all 1700, 2600X and 5950X.
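As an aside (not part of the original test runs), the flags added on top of kvm64 above correspond to the x86-64-v2 feature level this series proposes as the default; one quick way to confirm from inside a guest that they actually reach the VM is to compare /proc/cpuinfo against the v2 flag set. A minimal sketch, with flag names in /proc/cpuinfo spelling (sse4_1, not sse4.1) per the x86-64 psABI level definitions:

```python
# Sketch: check a guest's CPU flags against the x86-64-v2 feature level.
# Flag names use /proc/cpuinfo spelling (sse4_1, not sse4.1).

# Features required by x86-64-v2 on top of the x86-64 baseline,
# per the x86-64 psABI micro-architecture levels.
X86_64_V2 = {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}

def missing_v2_flags(cpuinfo_text: str) -> set:
    """Return the x86-64-v2 flags missing from a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return X86_64_V2 - present
    return set(X86_64_V2)  # no flags line found: report everything missing

# Usage inside the guest:
#   missing = missing_v2_flags(open("/proc/cpuinfo").read())
#   an empty set means the guest CPU satisfies x86-64-v2
```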

> If you like you can also test
>> qm set <ID> -args '-cpu Nehalem,enforce,+aes,-svm,-vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'
This was good on the 1700, but I suspect it may hang later; will check tomorrow.

2600X: the install was good, but after booting to the GUI login screen it 
froze at ~50% CPU use. After a reset it booted fine; no hang so far.
5950X: hung during installation, with no CPU use. Reset + reinstall worked OK.

3 VMs are left running at the login screen, to be checked tomorrow.

Versions are the same on all nodes:

# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.9-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-6.2: 7.4-1
pve-kernel-5.15: 7.4-1
pve-kernel-5.13: 7.1-9
pve-kernel-6.2.9-1-pve: 6.2.9-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph: 16.2.11-pve1
ceph-fuse: 16.2.11-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Sample .conf:

args: -cpu 
boot: order=scsi0;ide2;net0
cores: 2
memory: 2048
meta: creation-qemu=7.2.0,ctime=1685975783
name: test-debian11-2600x
net0: virtio=DE:29:78:74:12:C6,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: proxmox_r3_ssd:vm-128-disk-0,discard=on,iothread=1,size=10G
scsihw: virtio-scsi-single
smbios1: uuid=4942d35e-3853-4db2-b214-8492e5f4241a
sockets: 1
vmgenid: f5b16b3b-ed28-4f39-bc99-1e5f754728fd

> After testing use
>> qm set <ID> --delete args
> to get rid of the modification again.
> Make sure to stop/start the VM fresh after each modification.
> As for what to test, installing Debian 11 would be nice just for
> comparison, but other than that, just do what you like, shouldn't really
> matter too much :)
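The test cycle quoted above can be sketched as a small script. This is a dry run: the `run` wrapper only echoes the commands (drop the echo to execute them for real), and the VMID value is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch of the test cycle described above.
# Replace VMID with a real VM ID; the "run" wrapper echoes instead of
# executing, so the sequence can be reviewed before use.
VMID=128
run() { echo "$@"; }

# 1. Apply the CPU override under test.
run qm set "$VMID" -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'
# 2. Stop/start fresh so the new -cpu line takes effect
#    (a reboot from inside the guest is not enough).
run qm stop "$VMID"
run qm start "$VMID"
# 3. ...run the installer / workload test here...
# 4. Remove the override again after testing.
run qm set "$VMID" --delete args
```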

The hangs experienced with the second config reminded me of the hangs we 
got when migrating Intel<->AMD, as those also happened some time after 
the online migration had completed...


Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

