[pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu: memoryedit: add new max && virtio fields

Fiona Ebner f.ebner at proxmox.com
Mon Sep 4 12:48:44 CEST 2023


On 02.09.23 at 08:18, DERUMIER, Alexandre wrote:
> On Friday, 1 September 2023 at 12:24 +0200, Fiona Ebner wrote:
>> On 01.09.23 at 11:48, Thomas Lamprecht wrote:
>>> On 19/06/2023 at 09:28, Alexandre Derumier wrote:
>>>> +               xtype: 'pveMemoryField',
>>>> +               name: 'max',
>>>> +               minValue: 65536,
>>>> +               maxValue: 4194304,
>>>> +               value: '',
>>>> +               step: 65536,
>>>> +               fieldLabel: gettext('Maximum memory') + ' (MiB)',
>>>
>>> This huge step size will be confusing to users; there should be a
>>> way to have smaller steps (e.g., 1 GiB or even 128 MiB).
>>>
>>> Even nowadays, with a huge amount of memory installed on a lot of
>>> servers, deciding whether a (potentially bad-actor) VM can use up
>>> 64G or 128G still makes quite a difference on a lot of setups.
>>> Fiona is checking the backend to see if it might be done with a
>>> finer granularity, or what other options we have.
>>>
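For illustration, a finer-grained variant of the quoted field config could look like the sketch below. The 128 MiB step follows the suggestion above, while the min/max bounds are kept from the patch; all of these values were still under discussion, so nothing here is final.

```javascript
// Sketch only: the quoted 'max' field with a 128 MiB step instead of
// 64 GiB. Bounds are taken from the patch; the final values were still
// being discussed on the list.
const maxMemoryField = {
    xtype: 'pveMemoryField',
    name: 'max',
    minValue: 65536,   // 64 GiB, expressed in MiB, as in the patch
    maxValue: 4194304, // 4 TiB, expressed in MiB, as in the patch
    value: '',
    step: 128,         // 128 MiB steps, per the suggestion above
    fieldLabel: 'Maximum memory (MiB)', // gettext() omitted in this sketch
};
```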
> 
> I wasn't thinking of the max size as a security feature, but more as
> a way to define the minimum DIMM size needed to reach this max value.
> But indeed, it could be interesting.
> 

Permission-wise there would need to be a difference between changing
'current' and changing 'max', but I'd say that's something for later.

>> From a first glance, I think it should be possible. Even if we keep
>> the restriction "all memory devices should have the same size",
>> which makes the code easier:
>>
>> For dimms, we have 64 slots, so I don't see a reason why we can't
>> use 64 MiB granularity rather than 64 GiB.
>>
>>
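The arithmetic behind that granularity argument can be sketched as follows; the helper name and the 4 GiB static-memory baseline (mentioned further down in the thread) are illustrative, not actual pve-manager code.

```javascript
// Sketch of the granularity argument: with 64 DIMM slots and equal-sized
// DIMMs, the hot-pluggable range is slots * dimmSize, so the smallest
// equal DIMM size that reaches a given max is ceil(range / 64), rounded
// up to a 64 MiB step. Hypothetical helper, not actual pve-manager code.
const SLOTS = 64;
const STEP_MIB = 64; // 64 MiB granularity instead of 64 GiB

function minDimmSizeMiB(maxMiB, staticMiB = 4096) {
    const range = maxMiB - staticMiB;         // memory provided by DIMMs
    const perSlot = Math.ceil(range / SLOTS); // smallest equal DIMM size
    return Math.ceil(perSlot / STEP_MIB) * STEP_MIB; // round up to step
}
```

With these illustrative numbers, reaching a 64 GiB max needs DIMMs of at least minDimmSizeMiB(65536) = 960 MiB.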
> 
> Note that I think we shouldn't go under 128 MiB for the DIMM size, as
> it's the minimum hotplug granularity on Linux:
> 
> https://docs.kernel.org/admin-guide/mm/memory-hotplug.html
> "Memory Hot(Un)Plug Granularity
> Memory hot(un)plug in Linux uses the SPARSEMEM memory model, which
> divides the physical memory address space into chunks of the same size:
> memory sections. The size of a memory section is architecture
> dependent. For example, x86_64 uses 128 MiB and ppc64 uses 16 MiB."
> 
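Following the kernel documentation quoted above, any computed DIMM size would additionally need a 128 MiB floor on x86_64. A hypothetical clamp (not actual pve-manager code) might look like:

```javascript
// Sketch: clamp a DIMM size to the SPARSEMEM memory-section size quoted
// above (128 MiB on x86_64; other architectures differ, e.g. 16 MiB on
// ppc64), since Linux cannot hot(un)plug memory in smaller units.
const SECTION_MIB = 128; // x86_64 memory-section size

function clampDimmSizeMiB(dimmMiB) {
    // round up to a multiple of the section size, with the section as floor
    return Math.max(SECTION_MIB,
        Math.ceil(dimmMiB / SECTION_MIB) * SECTION_MIB);
}
```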

Okay, I see. Then let's go with the "create with support for more
initially and have API deny requests bigger than max"-approach.
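That approach could be sketched like this; the function and config field names are hypothetical, not the actual PVE API implementation.

```javascript
// Sketch of the approach: create the VM with hotplug headroom up to
// 'max', and have the API reject memory updates beyond it. Names are
// hypothetical, not the actual PVE API code.
function checkMemoryUpdate(conf, requestedMiB) {
    if (conf.max !== undefined && requestedMiB > conf.max) {
        throw new Error(
            `requested memory ${requestedMiB} MiB exceeds max ${conf.max} MiB`);
    }
    return requestedMiB;
}
```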

> Static memory is already set up at 4 GiB, so I really don't know if
> we want a 128 MiB DIMM size in 2023?
>
> I haven't really tested Windows and other OSes with DIMM sizes under
> 1 GiB.
> 

The current implementation starts out with adding 512 MiB-sized dimms,
so at least those should be fine.
