[pve-devel] adding a vm workload scheduler feature

Alexandre DERUMIER aderumier at odiso.com
Tue Nov 17 11:01:58 CET 2015


> What do you think about it ?
> 
>>Sounds great, but I think memory and io-wait should be part of the list 
>>as well.
Yes, sure. I just want to start with CPU, but memory could be added too.

I'm not sure about io-wait, as migrating the VM doesn't change the storage?


>>Why not keep it simple? You could extend pvestatd to save the 
>>performance numbers in a file in a specific folder in /etc/pve, since 
>>pvestatd already has these numbers. Each node names this file after its 
>>node name, and if the numbers are written via Data::Dumper, this could be 
>>a persisted hash like 

I was thinking of using RRD files to keep stats over a longer time. (Maybe average over the last x minutes.)

(For example, we don't want to migrate a VM if the host is only overloaded for 1 or 2 minutes
because of a VM with spiky CPU.)
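The smoothing idea could be sketched roughly like this (hypothetical Perl; the sample values, the 5-sample window, and the 80% threshold are all invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum);

# Average the last N samples before deciding to migrate, so a short
# CPU spike does not trigger a migration on its own.
sub smoothed_cpu {
    my ($samples, $window) = @_;
    my @recent = @$samples > $window
        ? @{$samples}[-$window .. -1]   # last $window readings only
        : @$samples;
    return sum(@recent) / scalar(@recent);
}

my @cpu = (20, 25, 95, 30, 28, 22);   # one spike at 95%
my $avg = smoothed_cpu(\@cpu, 5);     # (25+95+30+28+22)/5 = 40
print "migrate\n" if $avg > 80;       # spike is smoothed out: no migration
```

With a 5-sample window the single 95% spike averages down to 40%, so nothing happens; a sustained overload would push the average past the threshold.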




----- Original Message -----
From: "datanom.net" <mir at datanom.net>
To: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, 17 November 2015 10:11:40
Subject: Re: [pve-devel] adding a vm workload scheduler feature

Hi all, 

On 2015-11-17 08:23, Alexandre DERUMIER wrote: 
> 
> 
> What do you think about it ? 
> 
Sounds great, but I think memory and io-wait should be part of the list 
as well. 

> 
> As we don't have a master node, I don't know how to implement this: 
> 
> 1) each node tries to migrate its own VMs to another node with less CPU 
> usage. 
> Maybe with a global cluster lock, so that two nodes don't migrate in 
> both directions at the same time? 
> 
How would you distinguish between operator-initiated migration and 
automatic migration? 
I should think that operator-initiated migration should always overrule 
automatic migration. 

> 
> 2) have some kind of master service in the cluster (maybe with a 
> corosync service?), 
> which reads the global stats of all nodes and, through an algorithm, 
> performs the migrations. 
> 
Why not keep it simple? You could extend pvestatd to save the 
performance numbers in a file in a specific folder in /etc/pve, since 
pvestatd already has these numbers. Each node names this file after its 
node name, and if the numbers are written via Data::Dumper, this could be 
a persisted hash like 
{
    'cpu' => {
        'cur' => 12,
        'max' => 80
    },
    'mem' => {
        'cur' => 28,
        'max' => 96
    },
    'wait' => {
        'cur' => 2,
        'max' => 10
    }
}
cur is the reading from pvestatd and max is the configured threshold on 
each node. 
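The writer side could look something like this (a rough sketch, not existing pvestatd code; the stats folder and the .stats suffix are made-up names, and the demo writes to a temp directory where real code would target /etc/pve so pmxcfs replicates the file):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use File::Temp qw(tempdir);
use Sys::Hostname;

# Serialize the node's readings and thresholds with Data::Dumper so
# any node can eval() them back into a hash later.
sub dump_stats {
    my ($stats) = @_;
    local $Data::Dumper::Terse    = 1;   # bare hashref, no '$VAR1 ='
    local $Data::Dumper::Sortkeys = 1;   # stable, diff-friendly output
    return Dumper($stats);
}

my $stats = {
    cpu  => { cur => 12, max => 80 },    # cur: pvestatd reading, max: threshold
    mem  => { cur => 28, max => 96 },
    wait => { cur => 2,  max => 10 },
};

my $dir  = tempdir(CLEANUP => 1);        # stand-in for /etc/pve/stats
my $file = "$dir/" . hostname() . '.stats';
open(my $fh, '>', $file) or die "cannot write $file: $!";
print $fh dump_stats($stats);
close($fh);
```

Because the output is plain Perl, the reader side can recover the hash with a single eval of the file contents.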

Another daemon on each node, at a regular interval, assembles a hash by 
reading every file in the /etc/pve folder into: 
{
    'node1' => {
        'cpu' => {
            'cur' => 12,
            'max' => 80
        },
        'mem' => {
            'cur' => 28,
            'max' => 96
        },
        'wait' => {
            'cur' => 2,
            'max' => 10
        }
    },
    ......
}
and makes decisions according to a well-defined algorithm, which should 
also take into account that some VMs can be configured to be locked to a 
specific node. 
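One possible shape for that aggregator (hypothetical; the file layout, the cur-minus-max scoring rule, and the two-node demo are all assumptions for illustration, and a pinning check would hook in before the final decision):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use File::Temp qw(tempdir);

# Read every per-node stats file back with eval(), then pick the node
# furthest over its CPU threshold as the migration source and the node
# with the most headroom as the target.
sub assemble {
    my (@files) = @_;
    my %cluster;
    for my $f (@files) {
        my ($node) = $f =~ m{([^/]+)\.stats$} or next;
        open(my $fh, '<', $f) or die "cannot read $f: $!";
        local $/;                    # slurp mode
        my $stats = eval <$fh>;      # trusted input: our own Dumper output
        die "bad stats in $f: $@" if $@;
        $cluster{$node} = $stats;
    }
    return \%cluster;
}

sub pick_migration {
    my ($cluster) = @_;
    # load = cur - max; positive means the node is over its threshold
    my $load = sub { my $c = $cluster->{$_[0]}{cpu}; $c->{cur} - $c->{max} };
    my ($src, $dst);
    for my $node (sort keys %$cluster) {
        $src = $node if !defined $src || $load->($node) > $load->($src);
        $dst = $node if !defined $dst || $load->($node) < $load->($dst);
    }
    return undef if $load->($src) <= 0;   # nobody over threshold: do nothing
    return [$src, $dst];
}

# tiny demo with two fake nodes in a temp dir (stand-in for /etc/pve)
my $dir = tempdir(CLEANUP => 1);
local $Data::Dumper::Terse = 1;
my %demo = (
    node1 => { cpu => { cur => 90, max => 80 } },
    node2 => { cpu => { cur => 20, max => 80 } },
);
for my $n (keys %demo) {
    open(my $fh, '>', "$dir/$n.stats") or die $!;
    print $fh Dumper($demo{$n});
    close($fh);
}
my $plan = pick_migration(assemble(glob "$dir/*.stats"));
printf "migrate from %s to %s\n", @$plan if $plan;   # node1 -> node2
```

The key point is that whichever node runs the decision step only ever reads the replicated files, so it does not matter which node happens to act as "master" for a given run.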

As for locking I agree that there should be some kind of global locking 
scheme. 
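A minimal single-node sketch of the locking idea, assuming a plain lock file (note the caveat in the comments: flock() does not propagate between cluster nodes, so a real implementation would need a cluster-wide lock, e.g. via pmxcfs/corosync):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Take an exclusive, non-blocking advisory lock before an automatic
# migration and skip the run if someone else already holds it.
# NOTE: flock() is local to one node; this only illustrates the shape
# of the API, not a cluster-wide lock.
sub with_global_lock {
    my ($lockfile, $code) = @_;
    open(my $fh, '>>', $lockfile) or die "cannot open $lockfile: $!";
    unless (flock($fh, LOCK_EX | LOCK_NB)) {
        close($fh);
        return 0;            # another scheduler run is migrating; back off
    }
    $code->();               # do the migration while holding the lock
    flock($fh, LOCK_UN);
    close($fh);
    return 1;
}
```

An operator-initiated migration could take the same lock, which would make the "manual overrules automatic" rule fall out naturally: the scheduler simply backs off whenever it cannot get the lock.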

Just some quick thoughts. 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael <at> rasmussen <dot> cc 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E 
mir <at> datanom <dot> net 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C 
mir <at> miras <dot> org 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 
-------------------------------------------------------------- 


_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


