[pve-devel] [PATCH] memory hotplug patch v6

Alexandre DERUMIER aderumier at odiso.com
Wed Jan 21 08:33:41 CET 2015


>>Sigh, that is a problem, because each OS needs a different setting for startup 
>>memory. 
>>Using two different configuration options would solve that: 
>>
>>memory: XXX # assigned at startup 
>>dimm_memory: XXX # hot-pluggable memory

Seems good!
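
For example, a hypothetical VM config could then look like this (only a sketch of the proposed split; "dimm_memory" is just the name suggested above, not an existing option):

memory: 1024          # static memory assigned at startup (qemu -m)
dimm_memory: 65536    # hot-pluggable memory, exposed as dimm devices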


>>Sure. But we do not need 'unplug' when we 'add' memory (as suggested by daniel). 
Yes, I didn't like that either!



>>BTW, are there systems with odd number of NUMA nodes? 
Physically, I don't think so.

Virtually, though, it is possible
(you can have 1 socket with NUMA enabled).
I forgot to say that for memory hotplug with Windows, NUMA needs to be enabled! (It works with 1 socket.)
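
For illustration (assuming the boolean "numa" vm option from this patch series; treat the exact option names as an assumption, not a confirmed interface):

qm set <vmid> -sockets 1
qm set <vmid> -numa 1    # expose a NUMA topology to the guest; Windows needs this for memory hotplug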



I just looked at the mapping (columns: dimm name, size in MB, base offset in MB, size as % of offset); at the end:
   dimm250    4194304  113244160       3.70 
   dimm251    4194304  117438464       3.57 
   dimm252    4194304  121632768       3.45 
   dimm253    4194304  125827072       3.33 
   dimm254    4194304  130021376       3.23 
   dimm255    4194304  134215680       3.13 

That gives us 4 TB memory modules?
For a max memory of 127 TB?

Maybe that's a little bit too much? ;)
I would like to have more granularity.

Maybe something like this, spreading the dimms over the NUMA nodes (last column); a fuller sketch follows the example output below:

for (my $j = 0; $j < 8; $j++) {
    for (my $i = 0; $i < 32; $i++) {

   dimm251      16384     978944       1.67 0
   dimm252      16384     995328       1.65 1
   dimm253      16384    1011712       1.62 2
   dimm254      16384    1028096       1.59 3
   dimm255      16384    1044480       1.57 0
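
Below is a minimal, self-contained sketch of what I mean, assuming (from the numbers above) that the module size starts at 128 MB and doubles every 32 dimms, and that dimms are spread round-robin over the guest NUMA nodes. The helper name foreach_dimm, its arguments, and the starting values are only illustrative, not a final interface:

use strict;
use warnings;

sub foreach_dimm {
    my ($numa_nodes, $maxmem, $static_mem, $func) = @_;

    my $dimm_id   = 0;
    my $dimm_size = 128;          # MB; size of the modules in the first group (assumption)
    my $offset    = $static_mem;  # MB; memory already present at boot (qemu -m X)

    for (my $j = 0; $j < 8; $j++) {           # 8 groups ...
        for (my $i = 0; $i < 32; $i++) {      # ... of 32 dimms each = 256 slots
            my $name     = "dimm${dimm_id}";
            my $numanode = $i % $numa_nodes;  # keep dimmN on the same node everywhere (live migration)
            $func->($name, $dimm_size, $offset, $numanode);
            $offset += $dimm_size;
            $dimm_id++;
            return $offset if $offset >= $maxmem;
        }
        $dimm_size *= 2;                      # the next group uses modules twice as big
    }
    return $offset;
}

# Example: a guest with 4 NUMA nodes, 1 GB of static memory and up to 1 TB hot-pluggable.
foreach_dimm(4, 1048576, 1024, sub {
    my ($name, $size, $offset, $node) = @_;
    printf "%-8s %8d %10d %d\n", $name, $size, $offset, $node;
});

With these values the 256 slots top out at 32 * 128 MB * 255 ~= 1 TB of hot-pluggable memory, which seems a more reasonable ceiling than 127 TB.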




----- Original Message -----
From: "dietmar" <dietmar at proxmox.com>
To: "aderumier" <aderumier at odiso.com>
Cc: "Daniel Hunsaker" <danhunsaker at gmail.com>, "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Wednesday, 21 January 2015 08:19:40
Subject: Re: [pve-devel] [PATCH] memory hotplug patch v6

> I never thought of something like that. 
> It seems to solve all the problems :) 
> 
> 
> (Just one little note: we need to keep some "static" memory (qemu -m X) for 
> VM start, 
> because dimm modules are not readable until the initramfs is loaded.) 

Sigh, that is a problem, because each OS needs a different setting for startup 
memory. 
Using two different configuration options would solve that: 

memory: XXX # assigned at startup 
dimm_memory: XXX # hot-pluggable memory 


> > I think it could work, but currently unplug is not implemented in qemu, 
> >>We do not need unplug with the above fixed dimm mappings. 
> 
> I don't understand this. Why don't we need unplug? 
> If we reduce the memory, we want to unplug dimm modules, right? 

Sure. But we do not need 'unplug' when we 'add' memory (as suggested by Daniel). 

> > and also it's possible for advanced users to specify the topology manually 
> > qm set vmid -dimmX size,numa=node 
> > 
> >>Why is that required? Isn't it possible to assign that automatically 
> >>(distribute among all numa nodes)? 
> 
> I think we can simply hotplug the dimm on the node with the least memory. 
> We just need to take care with live migration, and assign the same memory modules 
> to the same NUMA node. 

BTW, are there systems with an odd number of NUMA nodes? 


