[pve-devel] corosync bug: cluster break after 1 node clean shutdown
Alexandre DERUMIER
aderumier at odiso.com
Tue Sep 29 07:30:22 CEST 2020
Also for test5: I restarted corosync on node1 at 17:54:05, which unlocked /etc/pve on the other nodes.
I have uploaded the logs as well: "corosync-restart-nodeX.log"
----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "Proxmox VE development discussion" <pve-devel at lists.proxmox.com>
Sent: Monday, 28 September 2020 17:59:20
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown
Here is a new test: http://odisoweb1.odiso.net/test5
This one occurred at corosync start.
node1:
-----
start corosync : 17:30:19
node2: /etc/pve locked
--------------
Current time : 17:30:24
I took backtraces of all nodes at the same time with parallel ssh at 17:35:22,
and coredumps of all nodes at the same time with parallel ssh at 17:42:26.
(Note that this time, /etc/pve was still locked after the backtrace/coredump.)
----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "Proxmox VE development discussion" <pve-devel at lists.proxmox.com>
Sent: Monday, 28 September 2020 11:35:00
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown
>>but, just to make sure, could you reproduce the issue once more, and
>>then (with debug symbols installed) run
>>
>>$ gdb -ex 'set pagination 0' -ex 'thread apply all bt' --batch -p $(pidof pmxcfs)
>>
>>on all nodes at the same time? this should minimize the fallout and show
>>us whether the thread that logged the first part of sending the state is
>>still around on the node that triggers the hang..
ok, no problem, I'll do a new test today
>>this looks like you bt-ed corosync this time around? if so, then this is
>>expected:
>>
>>- attach gdb to corosync
>>- corosync blocks
>>- other nodes notice corosync is gone on node X
>>- config change triggered
>>- sync restarts on all nodes, does not trigger bug this time
OK, thanks, that makes sense. I hadn't noticed this previously; this time it happened with corosync started.
----- Original Message -----
From: "Fabian Grünbichler" <f.gruenbichler at proxmox.com>
To: "Proxmox VE development discussion" <pve-devel at lists.proxmox.com>
Sent: Monday, 28 September 2020 11:17:37
Subject: Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown
On September 25, 2020 6:29 pm, Alexandre DERUMIER wrote:
> here a new hang:
>
> http://odisoweb1.odiso.net/test4/
okay, so at least we now know that something strange is going on inside
pmxcfs, and not inside corosync - we never reach the part where the broken
node (again #13 in this hang!) sends out the state message:
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] debug: dfsm mode is 1 (dfsm.c:706:dfsm_cpg_deliver_callback)
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] debug: received message's header type is 1 (dfsm.c:736:dfsm_cpg_deliver_callback)
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] debug: received state message (type = 1, subtype = 0, 32 bytes), mode is 1 (dfsm.c:782:dfsm_cpg_deliver_callback)
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] notice: received sync request (epoch 1/23389/000000A8) (dfsm.c:890:dfsm_cpg_deliver_callback)
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [main] debug: enter dfsm_release_sync_resources (dfsm.c:640:dfsm_release_sync_resources)
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] debug: enter dcdb_get_state 00000000365345C2 5F6E0B1F (dcdb.c:444:dcdb_get_state)
[...]
Sep 25 17:22:08 m6kvm13 pmxcfs[10086]: [dcdb] debug: dfsm_send_state_message_full: type 2 len 1 (dfsm.c:281:dfsm_send_state_message_full)
this should be followed by output like this (from the -unblock log,
where the sync went through):
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: send state message debug (dfsm.c:241:dfsm_send_message_full_debug_state)
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: cpg_handle 5109470997161967616 (dfsm.c:242:dfsm_send_message_full_debug_state)
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: iov[0] len 32 (dfsm.c:244:dfsm_send_message_full_debug_state)
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: iov[1] len 64792 (dfsm.c:244:dfsm_send_message_full_debug_state)
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: send state message loop body (dfsm.c:246:dfsm_send_message_full_debug_state)
Sep 25 17:57:39 m6kvm13 pmxcfs[10086]: [dcdb] notice: send state message result: 1 (dfsm.c:250:dfsm_send_message_full_debug_state)
but this never happens. the relevant code from our patched dfsm.c
(reordered):
static cs_error_t
dfsm_send_state_message_full(
    dfsm_t *dfsm,
    uint16_t type,
    struct iovec *iov,
    unsigned int len)
{
    g_return_val_if_fail(dfsm != NULL, CS_ERR_INVALID_PARAM);
    g_return_val_if_fail(DFSM_VALID_STATE_MESSAGE(type), CS_ERR_INVALID_PARAM);
    g_return_val_if_fail(!len || iov != NULL, CS_ERR_INVALID_PARAM);

    // this message is still logged
    cfs_dom_debug(dfsm->log_domain, "dfsm_send_state_message_full: type %d len %d", type, len);

    // everything below this point might not have happened anymore
    dfsm_message_state_header_t header;
    header.base.type = type;
    header.base.subtype = 0;
    header.base.protocol_version = dfsm->protocol_version;
    header.base.time = time(NULL);
    header.base.reserved = 0;
    header.epoch = dfsm->sync_epoch;

    struct iovec real_iov[len + 1];

    real_iov[0].iov_base = (char *)&header;
    real_iov[0].iov_len = sizeof(header);

    for (int i = 0; i < len; i++)
        real_iov[i + 1] = iov[i];

    return type == DFSM_MESSAGE_STATE ?
        dfsm_send_message_full_debug_state(dfsm, real_iov, len + 1, 1) :
        dfsm_send_message_full(dfsm, real_iov, len + 1, 1);
}
static cs_error_t
dfsm_send_message_full_debug_state(
    dfsm_t *dfsm,
    struct iovec *iov,
    unsigned int len,
    int retry)
{
    g_return_val_if_fail(dfsm != NULL, CS_ERR_INVALID_PARAM);
    g_return_val_if_fail(!len || iov != NULL, CS_ERR_INVALID_PARAM);

    struct timespec tvreq = { .tv_sec = 0, .tv_nsec = 100000000 };
    cs_error_t result;
    int retries = 0;

    // this message is not visible in log
    // we don't know how far above this we managed to run
    cfs_dom_message(dfsm->log_domain, "send state message debug");
    cfs_dom_message(dfsm->log_domain, "cpg_handle %" PRIu64, dfsm->cpg_handle);
    for (int i = 0; i < len; i++)
        cfs_dom_message(dfsm->log_domain, "iov[%d] len %zd", i, iov[i].iov_len);

loop:
    cfs_dom_message(dfsm->log_domain, "send state message loop body");
    result = cpg_mcast_joined(dfsm->cpg_handle, CPG_TYPE_AGREED, iov, len);
    cfs_dom_message(dfsm->log_domain, "send state message result: %d", result);

    if (retry && result == CS_ERR_TRY_AGAIN) {
        nanosleep(&tvreq, NULL);
        ++retries;
        if ((retries % 10) == 0)
            cfs_dom_message(dfsm->log_domain, "cpg_send_message retry %d", retries);
        if (retries < 100)
            goto loop;
    }

    if (retries)
        cfs_dom_message(dfsm->log_domain, "cpg_send_message retried %d times", retries);

    if (result != CS_OK &&
        (!retry || result != CS_ERR_TRY_AGAIN))
        cfs_dom_critical(dfsm->log_domain, "cpg_send_message failed: %d", result);

    return result;
}
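as a side note on the retry logic above: the loop gives up after at most 100 send attempts, i.e. roughly 10 seconds of waiting at 100 ms per retry. A minimal sketch (not pmxcfs code; the error-code values and the `fake_send` stub are assumptions for illustration) that keeps the same control flow but replaces cpg_mcast_joined with a stub that always reports "try again" and drops the nanosleep:

```c
enum cs_err { CS_OK = 1, CS_ERR_TRY_AGAIN = 6 };   /* values assumed for illustration */

static int attempts = 0;

/* Stub standing in for cpg_mcast_joined(): always reports "try again",
 * the only error the loop retries on. */
static int fake_send(void)
{
    attempts++;
    return CS_ERR_TRY_AGAIN;
}

/* Same control flow as the retry loop in dfsm_send_message_full_debug_state,
 * minus the logging and the 100 ms nanosleep, so the worst case runs instantly. */
static int send_with_retry(int retry)
{
    int result;
    int retries = 0;
loop:
    result = fake_send();
    if (retry && result == CS_ERR_TRY_AGAIN) {
        ++retries;
        if (retries < 100)
            goto loop;
    }
    return result;
}
```

running this with a permanently congested "send" ends after exactly 100 attempts with CS_ERR_TRY_AGAIN, which the caller then sees bubble up.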
I don't see much that could go wrong inside dfsm_send_state_message_full
after the debug log statement (it's just filling out the header and iov
structs, and calling 'dfsm_send_message_full_debug_state' since type ==
2 == DFSM_MESSAGE_STATE).
inside dfsm_send_message_full_debug_state, before the first log statement we
only check for dfsm being NULL or the message length/content being 0/NULL,
none of which can really happen with that call path. also, in that case we'd
return CS_ERR_INVALID_PARAM, which would bubble up into the delivery
callback and cause us to leave CPG, which would again be visible in the
logs..
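for reference, the guard behaviour being ruled out here can be sketched in plain C (a hypothetical analogue of glib's g_return_val_if_fail; the macro name, the `send_state` stand-in and the error-code values are made up for illustration): a failed check makes the function return CS_ERR_INVALID_PARAM immediately, before any of the log statements run.

```c
#include <stdio.h>
#include <stddef.h>

enum cs_err { CS_OK = 1, CS_ERR_INVALID_PARAM = 7 };  /* values assumed for illustration */

/* Plain-C analogue of glib's g_return_val_if_fail(): on a failed check,
 * log the expression and return the given value immediately. */
#define RETURN_VAL_IF_FAIL(expr, val) \
    do { \
        if (!(expr)) { \
            fprintf(stderr, "check failed: %s\n", #expr); \
            return (val); \
        } \
    } while (0)

/* Hypothetical stand-in for the entry checks of
 * dfsm_send_message_full_debug_state: same two guards, nothing else
 * before the first log statement. */
static int send_state(const void *dfsm, unsigned int len, const void *iov)
{
    RETURN_VAL_IF_FAIL(dfsm != NULL, CS_ERR_INVALID_PARAM);
    RETURN_VAL_IF_FAIL(!len || iov != NULL, CS_ERR_INVALID_PARAM);
    return CS_OK;  /* guards passed; the real function would log and send here */
}
```

so a bad parameter would produce an immediate error return (and an error path visible in the logs), not a silent hang.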
but, just to make sure, could you reproduce the issue once more, and
then (with debug symbols installed) run
$ gdb -ex 'set pagination 0' -ex 'thread apply all bt' --batch -p $(pidof pmxcfs)
on all nodes at the same time? this should minimize the fallout and show
us whether the thread that logged the first part of sending the state is
still around on the node that triggers the hang..
> Something new: while doing a coredump or bt-full on pmxcfs on node1,
> this unlocked /etc/pve on the other nodes
>
> /etc/pve unlocked (with coredump or bt-full): 17:57:40
this looks like you bt-ed corosync this time around? if so, then this is
expected:
- attach gdb to corosync
- corosync blocks
- other nodes notice corosync is gone on node X
- config change triggered
- sync restarts on all nodes, does not trigger bug this time
_______________________________________________
pve-devel mailing list
pve-devel at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel