[pve-devel] corosync bug: cluster break after 1 node clean shutdown

Fabian Grünbichler f.gruenbichler at proxmox.com
Thu Sep 17 11:21:45 CEST 2020


On September 16, 2020 5:17 pm, Alexandre DERUMIER wrote:
> I have produce it again, with the coredump this time
> 
> 
> restart corosync : 17:05:27
> 
> http://odisoweb1.odiso.net/pmxcfs-corosync2.log
> 
> 
> bt full
> 
> https://gist.github.com/aderumier/466dcc4aedb795aaf0f308de0d1c652b
> 
> 
> coredump
> 
> 
> http://odisoweb1.odiso.net/core.7761.gz

just a short update on this:

dcdb is stuck in START_SYNC mode, but nodeid 13 hasn't sent a STATE 
message (yet). it looks like either the START_SYNC message to node 13 
or the STATE response from it got lost or was processed incorrectly. 
until the mode switches to SYNCED (after all states have been received 
and the state update has gone through), regular/normal messages can 
still be sent, but incoming normal messages are queued and not 
processed. this is why the fuse access blocks: it sends the request 
out, but the response ends up in the queue.
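
to illustrate the queueing, here's a minimal sketch of that deliver 
logic (simplified and illustrative only - the mode names follow the 
pmxcfs dfsm code, the rest is made up for the example):

#include <glib.h>

/* illustrative sketch, not the actual pmxcfs dfsm source */
typedef enum {
    DFSM_MODE_START_SYNC = 0, /* waiting for STATE from all members */
    DFSM_MODE_SYNCED     = 1, /* state update applied, normal operation */
} dfsm_mode_t;

typedef struct {
    dfsm_mode_t mode;
    GQueue *msg_queue; /* normal messages received while syncing */
} dfsm_t;

static void
dfsm_deliver(dfsm_t *dfsm, gpointer msg)
{
    if (dfsm->mode != DFSM_MODE_SYNCED) {
        /* queue normal messages until the sync completes; if SYNCED
         * is never reached (e.g. one STATE message is lost), queued
         * responses are never processed and the fuse request blocks */
        g_queue_push_tail(dfsm->msg_queue, msg);
        return;
    }

    /* process_message(dfsm, msg); -- normal path */
}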

status (the other service running on top of dfsm) synced up correctly 
at the same time, so this is either a dcdb-specific bug, or just bad 
luck that one instance was affected and the other wasn't.

unfortunately, even with debug enabled the logs don't contain much 
information that would help (e.g., we don't log sending/receiving STATE 
messages except when they look 'wrong'), so Thomas is trying to 
reproduce this using your scenario here to improve turnaround time. if 
we can't reproduce it, we'll have to send you patches/patched debs with 
increased logging to narrow down what is going on. if we can, then we 
can hopefully find and fix the issue fast.
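
the extra logging would be along these lines (a hypothetical sketch - 
cfs_message is stubbed out here, and the helper and its call sites are 
made up for illustration, not the actual patch):

#include <stdint.h>
#include <stdio.h>

/* stand-in for the pmxcfs logging macro */
#define cfs_message(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)

/* log every START_SYNC/STATE message unconditionally, not just the
 * ones that look 'wrong' - called from the send and deliver paths */
static void
log_sync_message(const char *dir, const char *type,
                 uint32_t nodeid, uint32_t epoch)
{
    cfs_message("%s %s message, node %u, epoch %u",
                dir, type, nodeid, epoch);
}

/* e.g. log_sync_message("received", "STATE", 13, epoch); at the point
 * where the STATE message from node 13 should be handled */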
