[pve-devel] [PATCH] add numa options

Alexandre DERUMIER aderumier at odiso.com
Wed Dec 3 00:17:52 CET 2014


Ok,

Finally I found the last pieces of the puzzle:

to have autonuma balancing, we just need:

2 sockets, 2 cores, 2 GB RAM:

-object memory-backend-ram,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0 
-object memory-backend-ram,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1  

With this, the host kernel will try to balance memory across the NUMA nodes.
This command line also works if the host doesn't support NUMA.
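To see whether the host kernel will actually do this balancing, you can check the numa_balancing sysctl. A small sketch (the PROC_ROOT override is just a hypothetical hook to make the function testable, not anything Proxmox uses):

```shell
# Sketch: check whether the host kernel has automatic NUMA balancing
# enabled (CONFIG_NUMA_BALANCING). PROC_ROOT is overridable for testing.
numa_balancing_enabled() {
    local f="${PROC_ROOT:-/proc}/sys/kernel/numa_balancing"
    [ -r "$f" ] && [ "$(cat "$f")" = "1" ]
}

if numa_balancing_enabled; then
    echo "automatic NUMA balancing is on"
else
    echo "automatic NUMA balancing is off or unsupported"
fi
```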



Now, if we want to bind guest NUMA nodes to specific host NUMA nodes:

-object memory-backend-ram,size=1024M,id=ram-node0,host-nodes=0,policy=preferred  
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0 
-object memory-backend-ram,size=1024M,id=ram-node1,host-nodes=1,policy=bind
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1  

This requires that host-nodes=X exists on the physical host,
and it also needs the qemu-kvm --enable-numa flag
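Checking that host-nodes=X exists could be done by looking at sysfs before building the command line. A sketch (the SYSFS_ROOT override is a hypothetical hook for testing, not a real Proxmox option):

```shell
# Sketch: a host NUMA node X exists if /sys/devices/system/node/nodeX
# is present. SYSFS_ROOT is overridable for testing.
host_node_exists() {
    [ -d "${SYSFS_ROOT:-/sys}/devices/system/node/node$1" ]
}

if host_node_exists 0; then
    echo "host node 0 present"
else
    echo "host node 0 missing"
fi
```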



So, 
I think we could add:

numa: 0|1

which generates the first config: create one NUMA node per socket, and split the RAM evenly across the nodes
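The argument generation could be sketched like this (a hypothetical helper, not the actual Proxmox implementation; it assumes the socket/core counts and total memory come from the VM config):

```shell
# Sketch: generate the qemu arguments for "numa: 1" -- one guest NUMA
# node per socket, vCPUs assigned per socket, RAM split evenly.
gen_numa_args() {
    local sockets=$1 cores=$2 memory_mb=$3
    local mem_per_node=$((memory_mb / sockets))
    local node cpu_first cpu_last
    for node in $(seq 0 $((sockets - 1))); do
        cpu_first=$((node * cores))
        cpu_last=$((cpu_first + cores - 1))
        echo "-object memory-backend-ram,size=${mem_per_node}M,id=ram-node${node}"
        echo "-numa node,nodeid=${node},cpus=${cpu_first}-${cpu_last},memdev=ram-node${node}"
    done
}

gen_numa_args 2 2 2048
```

For 2 sockets, 2 cores and 2048 MB this reproduces the first config above.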



and also, for advanced users who need manual pinning:


numa0: cpus=<X-X>,memory=<mb>,hostnode=<X-X>,policy=(bind|preferred|...)
numa1: ...
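Translating such a line into qemu arguments could look like this (a hypothetical parser for the proposed syntax; the function name and the exact option set are just illustrative):

```shell
# Sketch: turn a "numaX: cpus=...,memory=...,hostnode=...,policy=..."
# config line into the corresponding -object/-numa arguments.
parse_numa_line() {
    local id=$1 opts=$2
    local cpus mem hostnode policy kv
    local IFS=','
    for kv in $opts; do
        case $kv in
            cpus=*)     cpus=${kv#cpus=} ;;
            memory=*)   mem=${kv#memory=} ;;
            hostnode=*) hostnode=${kv#hostnode=} ;;
            policy=*)   policy=${kv#policy=} ;;
        esac
    done
    echo "-object memory-backend-ram,size=${mem}M,id=ram-node${id},host-nodes=${hostnode},policy=${policy}"
    echo "-numa node,nodeid=${id},cpus=${cpus},memdev=ram-node${id}"
}

parse_numa_line 0 "cpus=0-1,memory=1024,hostnode=0,policy=bind"
```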



What do you think about it?




BTW, about pc-dimm hotplug, it's possible to specify the NUMA node id with "device_add pc-dimm,node=X"
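In the monitor the hotplug sequence could look something like this (a sketch; the memory backend object has to be created first, and the ids mem-dimm0/dimm0 are just placeholder names):

```
(qemu) object_add memory-backend-ram,id=mem-dimm0,size=1024M
(qemu) device_add pc-dimm,id=dimm0,memdev=mem-dimm0,node=1
```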


----- Original Message ----- 

From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Tuesday, December 2, 2014 20:25:51 
Subject: Re: [pve-devel] [PATCH] add numa options 

>>shared? That looks strange to me. 
I mean split across both nodes. 


I have checked libvirt a little, 
and I'm not sure, but I think that memory-backend-ram is optional for autonuma. 

It's more about cpu pinning/memory pinning on a selected host node 

Here is an example from libvirt: 
http://www.redhat.com/archives/libvir-list/2014-July/msg00715.html 
"qemu: pass numa node binding preferences to qemu" 

+-object memory-backend-ram,size=20M,id=ram-node0,host-nodes=3,policy=preferred \ 
+-numa node,nodeid=0,cpus=0,memdev=ram-node0 \ 
+-object memory-backend-ram,size=645M,id=ram-node1,host-nodes=0-7,policy=bind \ 
+-numa node,nodeid=1,cpus=1-27,cpus=29,memdev=ram-node1 \ 
+-object memory-backend-ram,size=23440M,id=ram-node2,\ 
+host-nodes=1-2,host-nodes=5,host-nodes=7,policy=bind \ 
+-numa node,nodeid=2,cpus=28,cpus=30-31,memdev=ram-node2 \ 

----- Original Message ----- 

From: "Dietmar Maurer" <dietmar at proxmox.com> 
To: "Alexandre DERUMIER" <aderumier at odiso.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Tuesday, December 2, 2014 19:42:45 
Subject: RE: [pve-devel] [PATCH] add numa options 

> "When do memory hotplug, if there is numa node, we should add the memory 
> size to the corresponding node memory size. 
> 
> For now, it mainly affects the result of hmp command "info numa"." 
> 
> 
> So, it seems to be done automatically. 
> Not sure on which node the pc-dimm is assigned, but maybe the free slots are 
> shared at start between the NUMA nodes. 

shared? That looks strange to me. 
_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
