[pve-devel] [RFC ha-manager] manager: clear stale maintenance node in edge case

Fiona Ebner f.ebner at proxmox.com
Wed Mar 22 17:21:30 CET 2023


A stale maintenance node setting can be left over in the edge case
where the whole cluster was shut down at the same time and the
service was never started on another node since the maintenance node
was set.

If a user ends up in this edge case, it would be rather surprising
that the service would ignore the rebalance-on-start setting and be
automatically migrated back to the "maintenance node", which actually
is not in maintenance mode anymore, after a migration away from it.
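
For context, this is roughly the service-data entry the manager sees
in that situation (a sketch only; the field names match those used in
Manager.pm, the concrete values are purely illustrative):

    # Illustrative only: the service never left node1, so the
    # maintenance marker from the simultaneous shutdown is still set
    # after the cluster comes back up.
    my $sd = {
        state            => 'started',
        node             => 'node1',  # node the service runs on now
        maintenance_node => 'node1',  # stale: equals the current node
    };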

Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
---

We could also think about doing the check more broadly in manage(),
in preparation for a feature where stopped services are migrated
during maintenance as well. But that needs more consideration.
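
A rough sketch of what such a broader check in manage() could look
like (hypothetical, not part of this patch; it reuses the helpers
from the hunk below and assumes $ss is the service status hash that
manage() iterates over):

    for my $sid (sort keys %$ss) {
        my $sd = $ss->{$sid};
        # only relevant while the marker still points at the current node
        next if !$sd->{maintenance_node} || $sd->{node} ne $sd->{maintenance_node};
        if ($ns->get_node_state($sd->{node}) eq 'online') {
            $haenv->log('info', "service '$sid': clearing stale maintenance node");
            delete $sd->{maintenance_node};
        }
    }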

 src/PVE/HA/Manager.pm                         | 18 +++++
 src/test/test-stale-maintenance-node/cmdlist  |  5 ++
 .../datacenter.cfg                            |  5 ++
 .../hardware_status                           |  5 ++
 .../test-stale-maintenance-node/log.expect    | 76 +++++++++++++++++++
 .../manager_status                            |  1 +
 .../service_config                            |  3 +
 7 files changed, 113 insertions(+)
 create mode 100644 src/test/test-stale-maintenance-node/cmdlist
 create mode 100644 src/test/test-stale-maintenance-node/datacenter.cfg
 create mode 100644 src/test/test-stale-maintenance-node/hardware_status
 create mode 100644 src/test/test-stale-maintenance-node/log.expect
 create mode 100644 src/test/test-stale-maintenance-node/manager_status
 create mode 100644 src/test/test-stale-maintenance-node/service_config

diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 0d0cad2..59e5cfe 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -907,6 +907,24 @@ sub next_state_started {
 			)
 		    );
 		}
+
+		if ($sd->{maintenance_node} && $sd->{node} eq $sd->{maintenance_node}) {
+		    my $node_state = $ns->get_node_state($sd->{node});
+		    if ($node_state eq 'online') {
+			# Having the maintenance node set here means that the service was never
+			# started on a different node since it was set. This can happen in the edge
+			# case that the whole cluster is shut down at the same time while the
+			# 'migrate' policy was configured. Node is not in maintenance mode anymore
+			# and service is started on this node, so it's fine to clear the setting.
+			$haenv->log(
+			    'info',
+			    "service '$sid': clearing stale maintenance node "
+				."'$sd->{maintenance_node}' setting (is current node)",
+			);
+			delete $sd->{maintenance_node};
+		    }
+		}
+
 		# ensure service get started again if it went unexpected down
 		# but ensure also no LRM result gets lost
 		$sd->{uid} = compute_new_uuid($sd->{state}) if defined($lrm_res);
diff --git a/src/test/test-stale-maintenance-node/cmdlist b/src/test/test-stale-maintenance-node/cmdlist
new file mode 100644
index 0000000..34bf737
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "shutdown node1", "shutdown node2", "shutdown node3"],
+    [ "power node1 on", "power node2 on", "power node3 on"]
+]
diff --git a/src/test/test-stale-maintenance-node/datacenter.cfg b/src/test/test-stale-maintenance-node/datacenter.cfg
new file mode 100644
index 0000000..de0bf81
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/datacenter.cfg
@@ -0,0 +1,5 @@
+{
+    "ha": {
+        "shutdown_policy": "migrate"
+    }
+}
diff --git a/src/test/test-stale-maintenance-node/hardware_status b/src/test/test-stale-maintenance-node/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-stale-maintenance-node/log.expect b/src/test/test-stale-maintenance-node/log.expect
new file mode 100644
index 0000000..cd1fb81
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/log.expect
@@ -0,0 +1,76 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:103' on node 'node1'
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node1)
+info     21    node1/lrm: got lock 'ha_agent_node1_lock'
+info     21    node1/lrm: status change wait_for_agent_lock => active
+info     21    node1/lrm: starting service vm:103
+info     21    node1/lrm: service status vm:103 started
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute shutdown node1
+info    120    node1/lrm: got shutdown request with shutdown policy 'migrate'
+info    120    node1/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info    120      cmdlist: execute shutdown node2
+info    120    node2/lrm: got shutdown request with shutdown policy 'migrate'
+info    120    node2/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info    120      cmdlist: execute shutdown node3
+info    120    node3/lrm: got shutdown request with shutdown policy 'migrate'
+info    120    node3/lrm: shutdown LRM, doing maintenance, removing this node from active list
+info    120    node1/crm: node 'node1': state changed from 'online' => 'maintenance'
+info    120    node1/crm: node 'node2': state changed from 'online' => 'maintenance'
+info    120    node1/crm: node 'node3': state changed from 'online' => 'maintenance'
+info    121    node1/lrm: status change active => maintenance
+info    124    node2/lrm: exit (loop end)
+info    124     shutdown: execute crm node2 stop
+info    123    node2/crm: server received shutdown request
+info    126    node3/lrm: exit (loop end)
+info    126     shutdown: execute crm node3 stop
+info    125    node3/crm: server received shutdown request
+info    143    node2/crm: exit (loop end)
+info    143     shutdown: execute power node2 off
+info    144    node3/crm: exit (loop end)
+info    144     shutdown: execute power node3 off
+info    160    node1/crm: status change master => lost_manager_lock
+info    160    node1/crm: status change lost_manager_lock => wait_for_quorum
+info    161    node1/lrm: status change maintenance => lost_agent_lock
+err     161    node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+err     181    node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+err     201    node1/lrm: get shutdown request in state 'lost_agent_lock' - detected 1 running services
+info    202     watchdog: execute power node1 off
+info    201    node1/crm: killed by poweroff
+info    202    node1/lrm: killed by poweroff
+info    202     hardware: server 'node1' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node1 on
+info    220    node1/crm: status change startup => wait_for_quorum
+info    220    node1/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute power node3 on
+info    220    node3/crm: status change startup => wait_for_quorum
+info    220    node3/lrm: status change startup => wait_for_agent_lock
+info    220    node1/crm: status change wait_for_quorum => master
+info    221    node1/lrm: status change wait_for_agent_lock => active
+info    221    node1/lrm: starting service vm:103
+info    221    node1/lrm: service status vm:103 started
+info    222    node2/crm: status change wait_for_quorum => slave
+info    224    node3/crm: status change wait_for_quorum => slave
+info    240    node1/crm: node 'node1': state changed from 'maintenance' => 'online'
+info    240    node1/crm: node 'node2': state changed from 'maintenance' => 'online'
+info    240    node1/crm: node 'node3': state changed from 'maintenance' => 'online'
+info    240    node1/crm: service 'vm:103': clearing stale maintenance node 'node1' setting (is current node)
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-stale-maintenance-node/manager_status b/src/test/test-stale-maintenance-node/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-stale-maintenance-node/service_config b/src/test/test-stale-maintenance-node/service_config
new file mode 100644
index 0000000..cfed86f
--- /dev/null
+++ b/src/test/test-stale-maintenance-node/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:103": { "node": "node1", "state": "enabled" }
+}
-- 
2.30.2
