[pve-devel] corosync bug: cluster break after 1 node clean shutdown
Alexandre DERUMIER
aderumier at odiso.com
Thu Sep 10 13:34:44 CEST 2020
>>as said, if the other nodes were not using HA, the watchdog-mux had no
>>client which could expire.
sorry, maybe I explained it badly,
but all my nodes had HA enabled.
I have double-checked the lrm_status json files from my morning backup, taken 2h before the problem,
and they were all in "active" state ("state":"active","mode":"active").
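(for reference, here is roughly how I checked them - just a quick sketch, assuming the backup keeps the standard /etc/pve/nodes/<node>/lrm_status layout, the backup path is a placeholder):

    # sketch: print state/mode of every lrm_status file found in a
    # backup of /etc/pve (adjust the placeholder path to your backup)
    import glob
    import json

    for path in sorted(glob.glob('/path/to/backup/nodes/*/lrm_status')):
        with open(path) as f:
            status = json.load(f)
        print(path, status.get('state'), status.get('mode'))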
I don't know why node7 didn't reboot; the only difference is that it was the crm master.
(I think the crm also resets the watchdog counter? maybe its behaviour is different from the lrm's?)
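(my understanding of the client side is roughly the pattern below - only a sketch, the socket path is the usual /run/watchdog-mux.sock but the keep-alive byte and update interval are assumptions on my side, not the real pve-ha-manager code):

    # sketch: how an HA daemon keeps its watchdog-mux client alive.
    # as long as it writes regularly, the mux keeps updating the hardware
    # watchdog; if the work loop hangs, the keep-alives stop with it.
    import socket
    import time

    def do_one_iteration():
        # placeholder for the real lrm/crm work loop body
        time.sleep(5)

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/run/watchdog-mux.sock')  # connecting arms the per-client timeout

    while True:
        do_one_iteration()
        sock.send(b'V')                     # keep-alive (byte value is an assumption)
        time.sleep(10)                      # update interval (assumed)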
>>above lines also indicate very high load.
>>Do you have some monitoring which shows the CPU/IO load before/during this event?
load (1,5,15) was 6 (for 48 cores), cpu usage 23%
no iowait on disk (vms are on a remote ceph, only proxmox services are running on the local ssd disk)
so nothing strange here :/
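regarding the client_watchdog_timeout theory in the quoted mail below, this is how I picture the mux side (again only a sketch with assumed values, ~60s client timeout, not the real watchdog-mux.c):

    # sketch: the mux stops updating /dev/watchdog once an active client
    # missed its keep-alives, so the hardware watchdog resets the node
    # shortly afterwards
    import time

    CLIENT_TIMEOUT = 60   # assumed value of client_watchdog_timeout

    last_update = {}      # client connection -> timestamp of last keep-alive

    def pet_hardware_watchdog(watchdog_fd):
        now = time.time()
        if any(now - t > CLIENT_TIMEOUT for t in last_update.values()):
            return                 # stop petting: the node will self-fence
        watchdog_fd.write(b'\0')   # normal case: keep the hardware watchdog happy
        watchdog_fd.flush()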
----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht at proxmox.com>
To: "Proxmox VE development discussion" <pve-devel at lists.proxmox.com>, "Alexandre Derumier" <aderumier at odiso.com>
Sent: Thursday, 10 September 2020 10:21:48
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown
On 10.09.20 06:58, Alexandre DERUMIER wrote:
> Thanks Thomas for the investigations.
>
> I'm still trying to reproduce...
> I think I have some special case here, because the user on the forum with 30 nodes had a corosync cluster split. (Note that I had this bug 6 months ago too, when shutting down a node, and the only way to recover was to fully stop corosync on all nodes, and then start corosync again on all nodes).
>
>
> But this time, the corosync logs look fine (every node correctly sees node2 down, and sees the remaining nodes)
>
> surviving node7 was the only node with HA, and the LRM didn't have the watchdog enabled (I haven't found any log like "pve-ha-lrm: watchdog active" for the last 6 months on this node)
>
>
> So, the timing was:
>
> 10:39:05 : "halt" command is sent to node2
> 10:39:16 : node2 is leaving corosync / halting -> every node sees it and correctly forms a new membership with the 13 remaining nodes
>
> ...I don't see any special logs (corosync, pmxcfs, pve-ha-crm, pve-ha-lrm) after node2 left.
> But there is still activity on the server: pve-firewall is still logging, vms are running fine
>
>
> between 10:40:25 - 10:40:34 : the watchdog resets the nodes, but not node7.
>
> -> so between 70s-80s after node2 went down, so I think that watchdog-mux was still running fine until then.
> (That sounds like the lrm was stuck, and client_watchdog_timeout had expired in watchdog-mux)
as said, if the other nodes were not using HA, the watchdog-mux had no
client which could expire.
>
> 10:40:41 : node7 loses quorum (as all other nodes have reset),
> 10:40:50 : node7's crm/lrm finally log.
>
> Sep 3 10:40:50 m6kvm7 pve-ha-crm[16196]: got unexpected error - error during cfs-locked 'domain-ha' operation: no quorum!
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: loop take too long (87 seconds)
> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: loop take too long (92 seconds)
The above lines also indicate very high load.
Do you have some monitoring which shows the CPU/IO load before/during this event?
> Sep 3 10:40:51 m6kvm7 pve-ha-crm[16196]: lost lock 'ha_manager_lock - cfs lock update failed - Permission denied
> Sep 3 10:40:51 m6kvm7 pve-ha-lrm[16140]: lost lock 'ha_agent_m6kvm7_lock - cfs lock update failed - Permission denied
>
>
>
> So, I really think that something had stuck the lrm/crm loop, and the watchdog was not reset because of that.
>