[pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields

DERUMIER, Alexandre alexandre.derumier at groupe-cyllene.com
Sat Sep 2 08:18:22 CEST 2023


Le vendredi 01 septembre 2023 à 12:24 +0200, Fiona Ebner a écrit :
> Am 01.09.23 um 11:48 schrieb Thomas Lamprecht:
> > Am 19/06/2023 um 09:28 schrieb Alexandre Derumier:
> > > +               xtype: 'pveMemoryField',
> > > +               name: 'max',
> > > +               minValue: 65536,
> > > +               maxValue: 4194304,
> > > +               value: '',
> > > +               step: 65536,
> > > +               fieldLabel: gettext('Maximum memory') + ' (MiB)',
> > 
> > This huge step size will be confusing to users, there should be a
> > way to have
> > smaller steps (e.g., 1 GiB or even 128 MiB).
> > 
> > As even nowadays, with a huge amount of installed memory on a lot
> > of servers,
> > deciding that a (potentially bad actor) VM can use up 64G or 128G
> > is still
> > quite the difference on a lot of setups. Fiona is checking the
> > backend here
> > to see if it might be done with a finer granularity, or what other
> > options
> > we have here.
> > 

I was not thinking about the max size as a security feature, but more as a
way to define the minimum dimm size needed to reach this max value.
But indeed, it could be interesting.
 

The step of max should be at minimum the dimm size:

max > 4 GiB && <= 64 GiB   : 1 GiB dimm size
max > 64 GiB && <= 128 GiB  : 2 GiB dimm size
max > 128 GiB && <= 256 GiB : 4 GiB dimm size


and we start qemu with the real maxmem of the range (e.g., a configured
max > 4 GiB && <= 64 GiB ----> qemu is started with 64 GiB maxmem).

This way, the user could change the max value within the current range
without needing to restart the VM.
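
To make it concrete, here is a rough sketch in Perl (illustrative names
only, not the actual qemu-server code) of how the dimm size and the qemu
maxmem could be derived from the configured max:

# Illustrative sketch only: derive the dimm size and the maxmem passed
# to qemu from the configured 'max'. All values are in MiB.
my @ranges = (
    # [ upper bound of the range, dimm size ]
    [  64 * 1024, 1024 ],    # max >   4 GiB && <=  64 GiB -> 1 GiB dimms
    [ 128 * 1024, 2048 ],    # max >  64 GiB && <= 128 GiB -> 2 GiB dimms
    [ 256 * 1024, 4096 ],    # max > 128 GiB && <= 256 GiB -> 4 GiB dimms
);

sub get_dimm_size_and_maxmem {
    my ($max) = @_;    # configured max in MiB

    foreach my $range (@ranges) {
        my ($range_max, $dimm_size) = @$range;
        return ($dimm_size, $range_max) if $max <= $range_max;
    }
    die "max memory of $max MiB is above the largest supported range\n";
}

# example: a configured max of 48 GiB falls into the first range, so
# qemu would be started with maxmem=64 GiB and 1 GiB dimms:
# my ($dimm_size, $qemu_maxmem) = get_dimm_size_and_maxmem(48 * 1024);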


> 
> From a first glance, I think it should be possible. Even if we keep
> the
> restriction "all memory devices should have the same size", which
> makes
> the code easier:
> 
> For dimms, we have 64 slots, so I don't see a reason why we can't use
> 64
> MiB granularity rather than 64 GiB.
> 
> 

Note that I think we shouldn't go under 128 MiB for the dimm size, as it's
the minimum hotplug granularity on Linux:

https://docs.kernel.org/admin-guide/mm/memory-hotplug.html
"Memory Hot(Un)Plug Granularity
Memory hot(un)plug in Linux uses the SPARSEMEM memory model, which
divides the physical memory address space into chunks of the same size:
memory sections. The size of a memory section is architecture
dependent. For example, x86_64 uses 128 MiB and ppc64 uses 16 MiB."

Static memory is already set up at 4 GiB, so I really don't know if we
want a 128 MiB dimm size in 2023?

I really haven't tested Windows and other OSes with dimm sizes under
1 GiB.


If really needed, we could add:

max > 4 GiB && <= 8 GiB   : 128 MiB dimm size
max > 8 GiB && <= 16 GiB  : 256 MiB dimm size
max > 16 GiB && <= 32 GiB : 512 MiB dimm size
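
If we go that route, it would just mean a few extra entries at the front
of the same illustrative lookup table from the sketch above, something
like:

# Hypothetical extra entries, prepended to the illustrative @ranges table:
unshift @ranges,
    [  8 * 1024, 128 ],    # max >  4 GiB && <=  8 GiB -> 128 MiB dimms
    [ 16 * 1024, 256 ],    # max >  8 GiB && <= 16 GiB -> 256 MiB dimms
    [ 32 * 1024, 512 ];    # max > 16 GiB && <= 32 GiB -> 512 MiB dimms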





> For virtio-mem, we have one device per socket (up to 8, assuming a
> power
> of 2), and the blocksize is 2 MiB, so we could have 16 MiB
> granularity.

Yes, it's not a problem to use a 16 MiB step granularity for max.
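
Just to spell out the arithmetic behind that step (illustrative values
only):

# one virtio-mem device per socket (up to 8), each grown/shrunk in
# 2 MiB blocks, so keeping all devices the same size gives a 16 MiB step:
my $max_sockets   = 8;
my $blocksize_mib = 2;
my $min_step_mib  = $max_sockets * $blocksize_mib;    # 16 MiB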


> Or is there an issue setting the 'size' for a  virtio-mem-pci device
> to
> such a fine grained value? Even if there is, we can just create the
> device with a bigger supported 'size' and have our API reject a
> request
> to go beyond the maximum later.
> 

Yes, I'm more in favor of the second proposal.

Like for classic memory, we could simply do something like:

max > 4 GiB && <= 64 GiB, with a max step of 16 MiB:

--> start qemu with maxmem=64 GiB (~32000 blocks of 2 MiB), but don't
allow the user, via the API, to add more memory than the max in the config.

The max memory is not actually allocated anyway (unless you use hugepages).

This would allow the user to change the max value within the current range
without needing to restart the VM.
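
A minimal sketch of the API-side check I have in mind (hypothetical helper
and config key names, not the actual code):

# Illustrative only: qemu is started with the maxmem of the current
# range (e.g. 64 GiB in 2 MiB virtio-mem blocks); the API just refuses
# hotplug requests above the configured 'max'.
sub check_memory_hotplug {
    my ($conf, $requested_mem) = @_;    # $requested_mem in MiB

    my $max = $conf->{max} // die "no max memory configured\n";
    die "requested $requested_mem MiB exceeds configured max of $max MiB\n"
        if $requested_mem > $max;

    # anything up to $max can be plugged without restarting the VM,
    # since qemu already runs with the range's maxmem.
    return 1;
}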



