[PVE-User] Locking HA during UPS shutdown
dORSY
dorsyka at yahoo.com
Thu Mar 10 13:08:48 CET 2022
Or just monitor NUT on the VMs and shut them down _before_ the hosts, so there's nothing left to migrate.
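That guest-side approach could look roughly like this in each VM's NUT client config (a sketch: the UPS name `myups`, the NUT server host `nut-host`, and the password are placeholders; the `secondary` keyword is NUT 2.8+, older releases use `slave`):

```
# /etc/nut/upsmon.conf on each VM (guest-side NUT client)
# "myups", "nut-host" and "secretpass" are placeholders for your setup.
MONITOR myups@nut-host 1 upsmon secretpass secondary

# Power the guest off as soon as the UPS reports on-battery + low-battery,
# i.e. before the hosts (which use a later/lower threshold) go down.
SHUTDOWNCMD "/sbin/shutdown -h now"
```

The hosts would then be configured to trigger at a lower battery level, so by the time they shut down the guests are already off and HA has nothing to migrate.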
On Thu, Mar 10, 2022 at 12:54 PM, admins at telehouse.solutions wrote:
Hi,
Here are two ideas: a shutdown sequence and a command sequence.
1: The shutdown sequence you can achieve by setting NUT on each node to monitor the UPS power only, then configuring each node to shut itself down at a different UPS battery level, e.g. node1 at 15% battery, node2 at 10% battery, and so on.
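For idea 1, one possible sketch (assuming a USB-attached UPS with the `usbhid-ups` driver; the section name `ups1` is a placeholder) is to override the low-battery threshold per node in `ups.conf`, so each node hits its shutdown trigger at a different charge level:

```
# /etc/nut/ups.conf on node1 -- node2 would use 10, node3 e.g. 5
[ups1]
    driver = usbhid-ups
    port = auto
    ignorelb                           # ignore the UPS's own low-battery flag
    override.battery.charge.low = 15   # this node declares "low battery" at 15%
```

With `ignorelb` set, the driver derives the low-battery condition from the overridden threshold instead of the UPS's built-in one, so the nodes stagger their shutdowns.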
2: You can set up a command sequence that first puts the PVE node into maintenance mode and then executes the shutdown. That way HA will not try to migrate VMs to a node in maintenance, and the chance of all nodes entering maintenance in exactly the same second is hardly a risk at all.
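For idea 2, a minimal sketch of such a wrapper, assuming a PVE version that provides the `node-maintenance` CRM command (7.3+); the script path is hypothetical, and it would be wired into NUT via `SHUTDOWNCMD` in `upsmon.conf`:

```
#!/bin/sh
# /usr/local/sbin/ups-shutdown.sh (hypothetical path) -- referenced as
#   SHUTDOWNCMD "/usr/local/sbin/ups-shutdown.sh"
# in /etc/nut/upsmon.conf on each node.

NODE="$(hostname)"

# Put this node into HA maintenance so the CRM stops scheduling
# services onto it and does not treat the shutdown as a node failure.
ha-manager crm-command node-maintenance enable "$NODE"

# Then power off; with the node in maintenance, HA will not attempt
# live migration to peers that are themselves shutting down.
shutdown -h now
```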
Hope that's helpful.
Regards,
Sto.
> On Mar 10, 2022, at 1:10 PM, Stefan Radman via pve-user <pve-user at lists.proxmox.com> wrote:
>
>
> From: Stefan Radman <stefan.radman at me.com>
> Subject: Locking HA during UPS shutdown
> Date: March 10, 2022 at 1:10:09 PM GMT+2
> To: PVE User List <pve-user at pve.proxmox.com>
>
>
> Hi
>
> I am configuring a 3 node PVE cluster with integrated Ceph storage.
>
> It is powered by 2 UPS that are monitored by NUT (Network UPS Tools).
>
> HA is configured with 3 groups:
> group pve1 nodes pve1:1,pve2,pve3
> group pve2 nodes pve1,pve2:1,pve3
> group pve3 nodes pve1,pve2,pve3:1
>
> That will normally place the VMs in each group on the corresponding node, unless that node fails.
>
> The cluster is configured to migrate VMs away from a node before shutting it down (Cluster=>Options=>HA Settings: shutdown_policy=migrate).
>
> NUT is configured to shut down the servers once the last of the two UPSs is running low on battery.
>
> My problem:
> When NUT starts shutting down the 3 nodes, HA will first try to live-migrate the VMs to another node.
> That live migration process gets stuck because all the nodes are shutting down simultaneously.
> It seems that the whole process eventually runs into a timeout, “powers off” all the VMs, and shuts down the nodes.
>
> My question:
> Is there a way to “lock” or temporarily de-activate HA before shutting down a node to avoid that deadlock?
>
> Thank you
>
> Stefan
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Best Regards,
Stoyan Stoyanov Sto | Solutions Manager
| Telehouse.Solutions | ICT Department
| phone/viber: +359 894774934
| telegram: @prostoSto
| skype: prosto.sto
| email: sto at telehouse.solutions
| website: www.telehouse.solutions
| address: Telepoint #2, Sofia, Bulgaria
Save paper. Don’t print
_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user