[pve-devel] [PATCH v2 3/3] Allow migrate-all button on HA enabled VMs

Caspar Smit casparsmit at supernas.eu
Thu Mar 17 14:12:08 CET 2016


2016-03-17 14:04 GMT+01:00 Thomas Lamprecht <t.lamprecht at proxmox.com>:

> Sorry, I wrote that other mail from mobile, and it seems I should look
> for another mail client, the current one doesn't know how to quote.
>
> I know that it's not ideal behaviour all the time, but we didn't have any
> good idea how to solve that (with a small, simple, nice patch) for the HA
> stack. But limiting (or expanding) the max_workers setting to match your
> setup is always a good idea, so it's a reasonable "workaround".
>

I tested with max_workers set to 1 in my datacenter.cfg, and now the
migrateall task still reports OK fairly quickly, but the actual migrations
now take place one at a time, which is fine by me :)
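
For anyone who wants to replicate this: datacenter.cfg uses simple
"key: value" lines, so as far as I can tell the whole change amounts to
one entry (the keyboard line below is only an example of other settings
that may already be in the file):

    max_workers: 1
    keyboard: en-us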

I also noticed the max_workers setting is not configurable in the GUI
(under Datacenter -> Options). Would that be a wanted feature I could
implement and create a patch for?
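
From a quick look I would guess the backend already declares the option in
the datacenter.cfg schema, and the missing part is only the edit window in
the GUI. Roughly the kind of schema entry I mean, in the usual PVE
JSONSchema style (this is a sketch from memory, not a copy of the actual
pve-cluster code; the variable name and description are my guess):

    # hypothetical sketch of a datacenter.cfg schema property; the real
    # declaration in pve-cluster may differ
    my $datacenter_properties = {
        max_workers => {
            type => 'integer',
            minimum => 1,
            optional => 1,
            description => "Maximal number of parallel workers for bulk"
                . " actions like migrate all, start all or stop all.",
        },
    };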


> btw. I picked your patch series up in my queue and will send a pull
> request. Did you already sign and send our CLA? As mentioned previously,
> this is needed to protect both you and us legally; until that is done we
> cannot include those patches, I'm afraid.
>

Yes, I signed and mailed the CLA 3 days ago and received a confirmation
from your office.


> If you already did, then I'll send it tomorrow.
>

Thanks!

Caspar

> cheers,
> Thomas
>
> On 17.03.2016 13:44, Caspar Smit wrote:
> > Thomas,
> >
> > Ah, I see, thank you for clarifying!
> >
> > Caspar
> >
> > 2016-03-17 13:27 GMT+01:00 Thomas Lamprecht <t.lamprecht at proxmox.com>:
> >
> >     Comments inline.
> >
> >     ----- Reply to message -----
> >     From: "Caspar Smit" <casparsmit at supernas.eu>
> >     To: "PVE development discussion" <pve-devel at pve.proxmox.com>
> >     Subject: [pve-devel] [PATCH v2 3/3] Allow migrate-all button on HA
> >     enabled VMs
> >     Date: Thu, Mar 17, 2016 11:55
> >
> >     Hi all,
> >
> >     During some more tests with this feature I (maybe) stumbled on a
> >     bug (or maybe this was by design).
> >
> >     When I select the migrate-all button and set the "parallel jobs"
> >     option to 1, I noticed the HA managed VMs were migrated at the same
> >     time (so it looks like the parallel jobs option is ignored).
> >     But I found out why this is:
> >
> >     When an HA managed VM is migrated, an "HA <vmid> - Migrate" task is
> >     spawned. This task returns an OK status way BEFORE the actual
> >     migration has taken place. The "HA <vmid> - Migrate" task spawns
> >     another task, called "VM <vmid> - Migrate", which does the actual
> >     migration.
> >
> >     Now I remember from PVE 3.4 that the "HA <vmid> - Migrate" task did
> >     not return an OK until the actual "VM <vmid> - Migrate" returned an
> >     OK. Was this changed on purpose or is this a bug?
> >
> >
> >
> >     This is by design. The HA stack consists of the local resource
> >     manager and the cluster resource manager, which work in sync with
> >     each other but asynchronously from the rest of the cluster.
> >
> >     You can limit the number of concurrent migrations by setting the
> >     max_workers setting in datacenter.cfg.
> >     Users should limit that if their setup cannot handle that many
> >     parallel migrations.
> >
> >
> >
> >     The result here is that the migrate-all task receives an OK (from
> >     the HA task) and starts the next migration, resulting in multiple HA
> >     migrations happening at once.
> >
> >
> >     This is expected.
> >
> >
> >
> >     Kind regards,
> >     Caspar
> >
> >     2016-03-14 12:07 GMT+01:00 Caspar Smit <casparsmit at supernas.eu>:
> >
> >         Signed-off-by: Caspar Smit <casparsmit at supernas.eu>
> >         ---
> >          PVE/API2/Nodes.pm | 9 ++++++---
> >          1 file changed, 6 insertions(+), 3 deletions(-)
> >
> >         diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
> >         index f1fb392..b2de907 100644
> >         --- a/PVE/API2/Nodes.pm
> >         +++ b/PVE/API2/Nodes.pm
> >         @@ -1208,9 +1208,6 @@ my $get_start_stop_list = sub {
> >                         $startup = { order => $bootorder };
> >                     }
> >
> >         -           # skip ha managed VMs (started by pve-ha-manager)
> >         -           return if PVE::HA::Config::vm_is_ha_managed($vmid);
> >         -
> >                     $resList->{$startup->{order}}->{$vmid} = $startup;
> >                     $resList->{$startup->{order}}->{$vmid}->{type} = $d->{type};
> >                 };
> >         @@ -1283,6 +1280,9 @@ __PACKAGE__->register_method ({
> >                                 die "unknown VM type '$d->{type}'\n";
> >                             }
> >
> >         +                   # skip ha managed VMs (started by pve-ha-manager)
> >         +                   next if PVE::HA::Config::vm_is_ha_managed($vmid);
> >         +
> >                             PVE::Cluster::check_cfs_quorum(); # abort when we loose quorum
> >
> >                             eval {
> >         @@ -1407,6 +1407,9 @@ __PACKAGE__->register_method ({
> >                         };
> >
> >                         foreach my $vmid (sort {$b <=> $a} keys %$vmlist) {
> >         +                   # skip ha managed VMs (stopped by pve-ha-manager)
> >         +                   next if PVE::HA::Config::vm_is_ha_managed($vmid);
> >         +
> >                             my $d = $vmlist->{$vmid};
> >                             my $upid;
> >                             eval { $upid = &$create_stop_worker($nodename, $d->{type}, $vmid, $d->{down}); };
> >         --
> >         2.1.4
> >
> >
> >
>
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>