[pve-devel] applied: [PATCH cluster] pmxcfs sync: properly check for corosync error

Thomas Lamprecht t.lamprecht at proxmox.com
Fri Sep 25 15:48:58 CEST 2020


On 25.09.20 15:36, Fabian Grünbichler wrote:
> 
>> Thomas Lamprecht <t.lamprecht at proxmox.com> hat am 25.09.2020 15:23 geschrieben:
>>
>>  
>> On 25.09.20 14:53, Fabian Grünbichler wrote:
>>> dfsm_send_state_message_full always returns != 0, since it returns
>>> cs_error_t which starts with CS_OK at 1, with values >1 representing
>>> errors.
>>>
>>> Signed-off-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
>>> ---
>>> unfortunately not the cause of Alexandre's shutdown/restart issue, but it
>>> might have caused some hangs as well, since we would be stuck in
>>> START_SYNC in that case..
>>>
>>>  data/src/dfsm.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>>
>>
>> applied, thanks! But as the old wrong code showed up as a critical error
>> "failed to send SYNC_START message" if it worked, it either (almost) never
>> works here or is not a probable case, else we'd have seen this earlier.
>>
>> (still a valid and appreciated fix, just noting)
> 
> no, the old wrong code never triggered the error handling (log + leave), no matter whether the send worked or failed - the return value cannot be 0, so the condition is never true. if the send failed, the code assumed the state machine was now in START_SYNC mode and kept waiting for STATE messages, which would never come since the other nodes haven't switched to START_SYNC..
> 

ah yeah, was confused about the CS_OK value for a moment
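
for the record, a minimal standalone sketch of the pattern (the mock enum only
mirrors what the commit message states, CS_OK = 1 and errors > 1; it is not the
actual corotypes.h or dfsm.c code):

    #include <stdio.h>

    /* mock of corosync's cs_error_t: CS_OK is 1, error values are > 1,
     * so the return value of such a function can never be 0 */
    typedef enum {
        CS_OK = 1,          /* per the commit message: enum starts at 1 */
        CS_ERR_LIBRARY = 2, /* illustrative error value only */
    } cs_error_t;

    /* stand-in for dfsm_send_state_message_full() */
    static cs_error_t send_state_message(int fail)
    {
        return fail ? CS_ERR_LIBRARY : CS_OK;
    }

    int main(void)
    {
        for (int fail = 0; fail <= 1; fail++) {
            cs_error_t rc = send_state_message(fail);

            if (!rc)            /* old condition: never true, rc >= CS_OK == 1 */
                printf("old check fires (unreachable)\n");

            if (rc != CS_OK)    /* fixed condition: true exactly when the send failed */
                printf("fixed check fires: rc=%d\n", (int)rc);
        }
        return 0;
    }

running it shows the old-style check never fires, while the corrected one
fires only for the simulated failure.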


> it would still show up in the logs, since a cpg_mcast_joined failure is always logged verbosely, but it would not be obvious that it caused the state machine to take a wrong turn, I think.
> 
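
right - and a hedged sketch of what the decision point looks like with the
corrected check, reusing the mock cs_error_t/send_state_message from the
sketch above (hypothetical helper, not the actual dfsm.c code; only the log
message text matches the real one):

    /* hypothetical sync start helper: with the corrected check, a failed send
     * is logged and we leave instead of silently waiting in START_SYNC for
     * STATE messages that the other nodes will never send */
    static int start_sync(int simulate_failure)
    {
        cs_error_t rc = send_state_message(simulate_failure);

        if (rc != CS_OK) {
            /* the real code logs "failed to send SYNC_START message" */
            fprintf(stderr, "failed to send SYNC_START message (rc=%d)\n", (int)rc);
            return -1;  /* bail out instead of entering START_SYNC */
        }

        /* only now is it safe to wait in START_SYNC for the other nodes' STATE messages */
        return 0;
    }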
