<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">2016-03-17 14:04 GMT+01:00 Thomas Lamprecht <span dir="ltr"><<a href="mailto:t.lamprecht@proxmox.com" target="_blank">t.lamprecht@proxmox.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sorry, I wrote that other mail from mobile and it seems I should switch to<br>
another mail client; the current one doesn't know how to quote.<br>
<br>
I know that it's not ideal behaviour all the time, but we didn't have any<br>
good idea how to solve that (with a small, simple, nice patch) for the HA<br>
stack. But limiting (or expanding) the max_workers setting to match your<br>
setup is always a good idea, so it's a reasonable "workaround".</blockquote><div><br></div><div>I tested with max_workers set to 1 in my datacenter.cfg and now the migrate-all task still says task OK fairly quickly, but the actual migrations now take place one at a time, which is fine by me :)</div><div><br></div><div>I also noticed the max_workers setting is not configurable in the GUI (under Datacenter->Options). Would that be a wanted feature I could implement and create a patch for?</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
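</blockquote><div><br></div><div>For reference, a minimal sketch of what such a datacenter.cfg entry looks like (the value 1 is just the example from this thread; pick whatever your setup can handle in parallel):</div><div><pre># /etc/pve/datacenter.cfg
# cap the number of parallel workers used by bulk actions
# such as migrate-all / start-all / stop-all
max_workers: 1</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">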
<br>
btw. I picked your patch series up in my queue and will send a pull<br>
request. Did you already sign and send our CLA? As mentioned<br>
previously, this is needed to protect you and us legally; before that<br>
is done we cannot include those patches, I'm afraid.</blockquote><div><br></div><div>Yes, I signed and mailed the CLA 3 days ago and received a confirmation from your office.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
If you already did then I'll send it tomorrow.<br></blockquote><div><br></div><div>Thanks!</div><div><br></div><div>Caspar </div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
cheers,<br>
Thomas<br>
<span class=""><br>
On 17.03.2016 13:44, Caspar Smit wrote:<br>
> Thomas,<br>
><br>
> Ahh, I see, thank you for clarifying!<br>
><br>
> Caspar<br>
><br>
> 2016-03-17 13:27 GMT+01:00 Thomas Lamprecht <<a href="mailto:t.lamprecht@proxmox.com">t.lamprecht@proxmox.com</a><br>
</span>> <mailto:<a href="mailto:t.lamprecht@proxmox.com">t.lamprecht@proxmox.com</a>>>:<br>
<span class="">><br>
> Comments inline.<br>
><br>
> ----- Reply to the message -----<br>
> From: "Caspar Smit" <<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a><br>
</span>> <mailto:<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a>>><br>
<span class="">> To: "PVE development discussion" <<a href="mailto:pve-devel@pve.proxmox.com">pve-devel@pve.proxmox.com</a><br>
</span>> <mailto:<a href="mailto:pve-devel@pve.proxmox.com">pve-devel@pve.proxmox.com</a>>><br>
<div><div class="h5">> Subject: [pve-devel] [PATCH v2 3/3] Allow migrate-all button on HA<br>
> enabled VMs<br>
> Date: Thu, Mar 17, 2016 11:55<br>
><br>
> Hi all,<br>
><br>
> During some more tests with this feature I (maybe) stumbled on a bug<br>
> (or maybe this was by design).<br>
><br>
> When I select the migrate-all button and set the "parallel jobs"<br>
> option to 1, I noticed the HA-managed VMs were migrated at the same<br>
> time (so it looks like the parallel jobs option is ignored).<br>
> But I found out why this is:<br>
><br>
> When a HA managed VM is migrated a "HA <vmid> - Migrate" task is<br>
> spawned. This task returns an OK status way BEFORE the actual<br>
> migration has taken place. The "HA <vmid> - Migrate" task spawns<br>
> another task which does the actual migration called "VM <vmid> -<br>
> Migrate".<br>
><br>
> Now I remember from PVE 3.4 that the "HA <vmid> - Migrate" task did<br>
> not return an OK until the actual "VM <vmid> - Migrate" returned an<br>
> OK. Was this changed on purpose or is this a bug?<br>
><br>
><br>
><br>
> This is by design. The HA stack consists of the local resource<br>
> manager and the cluster resource manager, which work in sync with<br>
> each other but asynchronously from the rest of the cluster.<br>
><br>
> You can limit the number of concurrent migrations via the<br>
> max_workers setting in datacenter.cfg.<br>
> Users should limit that if their setup cannot handle that many<br>
> migrations in parallel.<br>
><br>
><br>
><br>
> The result here is that the migrate-all task receives an OK (from<br>
> the HA task) and starts the next migration, resulting in multiple HA<br>
> migrations happening at once.<br>
><br>
><br>
> This is expected.<br>
><br>
><br>
><br>
> Kind regards,<br>
> Caspar<br>
><br>
> 2016-03-14 12:07 GMT+01:00 Caspar Smit <<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a><br>
</div></div>> <mailto:<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a>>>:<br>
><br>
> Signed-off-by: Caspar Smit <<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a><br>
> <mailto:<a href="mailto:casparsmit@supernas.eu">casparsmit@supernas.eu</a>>><br>
<div><div class="h5">> ---<br>
> PVE/API2/Nodes.pm | 9 ++++++---<br>
> 1 file changed, 6 insertions(+), 3 deletions(-)<br>
><br>
> diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm<br>
> index f1fb392..b2de907 100644<br>
> --- a/PVE/API2/Nodes.pm<br>
> +++ b/PVE/API2/Nodes.pm<br>
> @@ -1208,9 +1208,6 @@ my $get_start_stop_list = sub {<br>
> $startup = { order => $bootorder };<br>
> }<br>
><br>
> - # skip ha managed VMs (started by pve-ha-manager)<br>
> - return if PVE::HA::Config::vm_is_ha_managed($vmid);<br>
> -<br>
> $resList->{$startup->{order}}->{$vmid} = $startup;<br>
> $resList->{$startup->{order}}->{$vmid}->{type} =<br>
> $d->{type};<br>
> };<br>
> @@ -1283,6 +1280,9 @@ __PACKAGE__->register_method ({<br>
> die "unknown VM type '$d->{type}'\n";<br>
> }<br>
><br>
> + # skip ha managed VMs (started by<br>
> pve-ha-manager)<br>
> + next if<br>
> PVE::HA::Config::vm_is_ha_managed($vmid);<br>
> +<br>
> PVE::Cluster::check_cfs_quorum(); # abort<br>
> when we loose quorum<br>
><br>
> eval {<br>
> @@ -1407,6 +1407,9 @@ __PACKAGE__->register_method ({<br>
> };<br>
><br>
> foreach my $vmid (sort {$b <=> $a} keys %$vmlist) {<br>
> + # skip ha managed VMs (stopped by<br>
> pve-ha-manager)<br>
> + next if<br>
> PVE::HA::Config::vm_is_ha_managed($vmid);<br>
> +<br>
> my $d = $vmlist->{$vmid};<br>
> my $upid;<br>
> eval { $upid =<br>
> &$create_stop_worker($nodename, $d->{type}, $vmid, $d->{down}); };<br>
> --<br>
> 2.1.4<br>
><br>
><br>
><br>
> _______________________________________________<br>
> pve-devel mailing list<br>
</div></div>> <a href="mailto:pve-devel@pve.proxmox.com">pve-devel@pve.proxmox.com</a> <mailto:<a href="mailto:pve-devel@pve.proxmox.com">pve-devel@pve.proxmox.com</a>><br>
> <a href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel" rel="noreferrer" target="_blank">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel</a><br>
<div class="HOEnZb"><div class="h5">><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>