[PVE-User] Hotplug Memory and default Linux kernel parameters

Anderson, Stuart B. sba at caltech.edu
Sun Aug 27 00:41:45 CEST 2023


Enabling PVE Hotplug Memory for a Linux guest (tested with PVE 7/8 and EL8/9) results in default kernel parameters that are orders of magnitude smaller than without hotplug. It appears the kernel mistakenly sizes its defaults as if the guest had only 1GB of memory. Does anyone know how to get the same kernel defaults with Hotplug Memory enabled as with it disabled? Is this a bug in PVE, QEMU, or the way Linux queries QEMU?


For example, a PVE7/EL8 VM with 32GB of Hotplug Memory has a very small value of Max processes:

[root at ldas-pcdev4 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
Max processes             2654                 2654                 processes 

compared to disabling Hotplug Memory:

[root at condor-f1 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
Max processes             127390               127390               processes 
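For context (my reading of kernel/fork.c, not from the original post): at boot, fork_init() autosizes max_threads from the RAM present at that moment, and the init task's RLIMIT_NPROC default is half of that. With hotplug, only the initial 1G DIMM is present at boot, so the default collapses. A rough sketch of the arithmetic, assuming x86-64 where THREAD_SIZE is 16 KiB:

```python
# Assumption: fork_init() computes
#   max_threads = nr_pages * PAGE_SIZE / (THREAD_SIZE * 8)
# and sets the default RLIMIT_NPROC to max_threads / 2.
THREAD_SIZE = 16 * 1024  # x86-64 kernel stack size

def default_nproc(boot_ram_bytes):
    max_threads = boot_ram_bytes // (THREAD_SIZE * 8)
    return max_threads // 2

# 1 GiB visible at boot (hotplug case) vs. 32 GiB (non-hotplug case).
# Real values come out somewhat lower because the kernel counts
# totalram_pages() after reserved memory is subtracted.
print(default_nproc(1 << 30))   # ~ the small hotplug default
print(default_nproc(32 << 30))  # ~ the expected 32G default
```

The computed figures (4096 vs. 131072) are in the same ballpark as the observed 2654 vs. 127390, which supports the boot-time-RAM explanation.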


Presumably this is due to the following memory layout as seen by the kernel,

#
# With Hotplug Memory: 1 bank with a 1GB DIMM
#
[root at ldas-pcdev4 ~]# lsmem
RANGE                                 SIZE  STATE REMOVABLE     BLOCK
0x0000000000000000-0x000000003fffffff   1G online       yes       0-7
0x0000010000000000-0x00000107bfffffff  31G online       yes 8192-8439

Memory block size:       128M
Total online memory:      32G
Total offline memory:      0B

[root at ldas-pcdev4 ~]# lshw -class memory
  *-firmware                        description: BIOS
       vendor: SeaBIOS
       physical id: 0
       version: rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org
       date: 04/01/2014
       size: 96KiB
  *-memory
       description: System Memory
       physical id: 1000
       size: 32GiB
       capabilities: ecc
       configuration: errordetection=multi-bit-ecc
     *-bank
          description: DIMM RAM
          vendor: QEMU
          physical id: 0
          slot: DIMM 0
          size: 1GiB
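One way to confirm how much RAM the kernel actually saw at boot is the "Memory:" line in the kernel ring buffer (on a live guest: `journalctl -k -b | grep -m1 'Memory:'`). A sketch parsing a sample line (the figures below are illustrative, not taken from these VMs):

```shell
# Sample boot log line (hypothetical values); the figure before
# "available" is the physical RAM the kernel counted at boot -- with
# hotplug enabled it reflects only the initial 1G DIMM, not the full 32G.
sample='[    0.013974] Memory: 680292K/1048040K available'
boot_kib=$(echo "$sample" | sed -n 's|.*/\([0-9]*\)K available.*|\1|p')
echo "RAM at boot: ${boot_kib} KiB"
```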


#
# Without Hotplug Memory: 2 banks with 16GB DIMMs
#
[root at condor-f1 ~]# lsmem
RANGE                                 SIZE  STATE REMOVABLE  BLOCK
0x0000000000000000-0x00000000bfffffff   3G online       yes   0-23
0x0000000100000000-0x000000083fffffff  29G online       yes 32-263

Memory block size:       128M
Total online memory:      32G
Total offline memory:      0B

[root at condor-f1 ~]# lshw -class memory
  *-firmware                        description: BIOS
       vendor: SeaBIOS
       physical id: 0
       version: rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org
       date: 04/01/2014
       size: 96KiB
  *-memory
       description: System Memory
       physical id: 1000
       size: 32GiB
       capabilities: ecc
       configuration: errordetection=multi-bit-ecc
     *-bank:0
          description: DIMM RAM
          vendor: QEMU
          physical id: 0
          slot: DIMM 0
          size: 16GiB
     *-bank:1
          description: DIMM RAM
          vendor: QEMU
          physical id: 1
          slot: DIMM 1
          size: 16GiB



P.S. Unfortunately, this isn't fixed with PVE8 (with a newer QEMU) or by updating to a newer EL9 kernel. Here is a PVE8/EL9 VM with 233GB of Hotplug Memory showing the same problematically small value:

[root at pcdev15 ~]# cat /etc/redhat-release 
Rocky Linux release 9.2 (Blue Onyx)

[root at pcdev15 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
Max processes             2659                 2659                 processes
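Until the root cause is found, one possible workaround (my own untested suggestion, not confirmed anywhere in this thread) is to stop relying on the kernel's boot-time autosizing and pin the limits explicitly, e.g. with the non-hotplug values observed above:

```shell
# Raise the kernel-wide thread cap to the value a 32G non-hotplug
# guest would have computed at boot (2 x the 127390 NPROC default).
cat >/etc/sysctl.d/90-hotplug-threads.conf <<'EOF'
kernel.threads-max = 254780
EOF
sysctl --system

# Override the default per-process RLIMIT_NPROC that systemd passes
# to its children (including systemd-logind) via a drop-in.
mkdir -p /etc/systemd/system.conf.d
cat >/etc/systemd/system.conf.d/90-nproc.conf <<'EOF'
[Manager]
DefaultLimitNPROC=127390:127390
EOF
systemctl daemon-reexec
```

This papers over the symptom rather than fixing the undersized defaults, so I'd still like to know why the hotplug layout is reported this way.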


--
Stuart Anderson
sba at caltech.edu
