[pve-devel] [RFC cluster] status: clear stale kv stores upon sync

Fiona Ebner f.ebner at proxmox.com
Thu Oct 6 14:54:14 CEST 2022


This avoids stale kv entries sticking around after a node leaves the
CPG. Now, each kv entry is guaranteed to be something a node sent
after (or upon) joining the CPG.

This avoids scenarios where a user of pmxcfs (like pvestatd) on node A
has not yet had time to broadcast up-to-date kv entries, while a user
of pmxcfs on node B already sees node A as online and consumes the
outdated value (which cannot be detected as outdated).

In particular, this should be helpful for the more static information
broadcast by pvestatd, which (mostly) doesn't change while a node is
running.

This could also be done as part of the dfsm_confchg() callback, but
then (at least) the additional guarantee that "confchg always happens
before the kvstore messages arriving during sync" would be needed
(pointed out by Fabian). That guarantee should hold in practice, but
dfsm_process_state_update() doesn't need it and is also a fitting
place to do the clearing.
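For illustration, a rough sketch of what that alternative could look
like (hypothetical, not what this patch does; it assumes the
dfsm_confchg() callback in status.c with the usual corosync confchg
signature, and would only be safe given the ordering guarantee above):

    static void
    dfsm_confchg(
        dfsm_t *dfsm,
        gpointer data,
        const struct cpg_address *member_list,
        size_t member_list_entries)
    {
        // ... existing handling of the changed membership ...

        // collect the ids of the current members and clear the kv
        // stores of all other nodes
        uint32_t member_ids[member_list_entries];
        for (size_t i = 0; i < member_list_entries; i++)
            member_ids[i] = member_list[i].nodeid;

        cfs_status_clear_other_kvstores(member_ids, (int)member_list_entries);
    }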

The cfs_status_clear_other_kvstores() function could take a hash table
with the IDs to avoid the quadratic loop, but since the number of nodes
in real-world setups is never very big, the simple loop is likely
faster than the additional work of constructing that hash table at the
call site.
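For comparison, a hypothetical sketch of such a variant (not part of
this patch; it assumes nodes_byid hashes its uint32_t keys with
g_int_hash/g_int_equal, so that lookups by value work):

    // hypothetical variant taking the set of node ids to keep
    void
    cfs_status_clear_other_kvstores_set(GHashTable *keep_node_ids)
    {
        g_mutex_lock (&mutex);

        if (!cfs_status.clinfo || !cfs_status.clinfo->nodes_byid)
            goto unlock;

        GHashTableIter iter;
        gpointer key, value;
        g_hash_table_iter_init (&iter, cfs_status.clinfo->nodes_byid);

        // one pass with O(1) membership tests instead of scanning an array
        while (g_hash_table_iter_next (&iter, &key, &value)) {
            cfs_clnode_t *clnode = (cfs_clnode_t *)value;
            if (!g_hash_table_contains(keep_node_ids, key) && clnode->kvhash)
                g_hash_table_remove_all(clnode->kvhash);
        }

    unlock:
        g_mutex_unlock (&mutex);
    }

The call site would then have to build the set first, e.g.:

    GHashTable *keep = g_hash_table_new(g_int_hash, g_int_equal);
    for (int i = 0; i < syncinfo->node_count; i++)
        g_hash_table_add(keep, &syncinfo->nodes[i].nodeid);
    cfs_status_clear_other_kvstores_set(keep);
    g_hash_table_destroy(keep);

which is exactly the extra work mentioned above.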

Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
---

Many thanks to Fabian for helpful discussions!

Another alternative for avoiding the quadratic loop would be to create
a copy of the hash table, iterate over skip_node_ids once to remove
those entries from the copy (making sure the original values aren't
freed!), and then iterate over the remaining entries once. Not sure
how much nicer that would be, though.
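Roughly (hypothetical sketch, not part of this patch; again assuming
g_int_hash/g_int_equal for the node id keys):

    // shallow copy without destroy functions, so removing entries from
    // the copy never frees keys/values owned by the original table
    GHashTable *copy = g_hash_table_new(g_int_hash, g_int_equal);

    GHashTableIter iter;
    gpointer key, value;
    g_hash_table_iter_init(&iter, cfs_status.clinfo->nodes_byid);
    while (g_hash_table_iter_next(&iter, &key, &value))
        g_hash_table_insert(copy, key, value);

    // one pass to drop the nodes that are part of the sync ...
    for (int i = 0; i < count; i++)
        g_hash_table_remove(copy, &skip_node_ids[i]);

    // ... and one pass to clear the kv stores of the remaining nodes
    g_hash_table_iter_init(&iter, copy);
    while (g_hash_table_iter_next(&iter, &key, &value)) {
        cfs_clnode_t *clnode = (cfs_clnode_t *)value;
        if (clnode->kvhash)
            g_hash_table_remove_all(clnode->kvhash);
    }

    g_hash_table_destroy(copy);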

 data/src/status.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/data/src/status.c b/data/src/status.c
index 9bceaeb..3da57cf 100644
--- a/data/src/status.c
+++ b/data/src/status.c
@@ -539,6 +539,46 @@ void cfs_status_set_clinfo(
 	g_mutex_unlock (&mutex);
 }
 
+void
+cfs_status_clear_other_kvstores(
+	uint32_t *skip_node_ids,
+	int count)
+{
+	g_mutex_lock (&mutex);
+
+	if (!cfs_status.clinfo || !cfs_status.clinfo->nodes_byid) {
+		goto unlock; /* ignore */
+	}
+
+	GHashTable *ht = cfs_status.clinfo->nodes_byid;
+	GHashTableIter iter;
+	gpointer key, value;
+
+	g_hash_table_iter_init (&iter, ht);
+
+	// Quadratic in the number of nodes, but it's safe to assume that the number is small enough
+	while (g_hash_table_iter_next (&iter, &key, &value)) {
+		uint32_t nodeid = *(uint32_t *)key;
+		cfs_clnode_t *clnode = (cfs_clnode_t *)value;
+		gboolean skip = FALSE;
+
+		for (int i = 0; i < count; i++) {
+			if (nodeid == skip_node_ids[i]) {
+				skip = TRUE;
+				break;
+			}
+		}
+
+		if (!skip && clnode->kvhash) {
+		cfs_debug("clearing kv store of node %u", nodeid);
+			g_hash_table_remove_all(clnode->kvhash);
+		}
+	}
+
+unlock:
+	g_mutex_unlock (&mutex);
+}
+
 static void
 dump_kvstore_versions(
 	GString *str,
@@ -1769,12 +1809,15 @@ dfsm_process_state_update(
 	g_return_val_if_fail(syncinfo != NULL, -1);
 
 	clog_base_t *clog[syncinfo->node_count];
+	uint32_t sync_node_ids[syncinfo->node_count];
 
 	int local_index = -1;
 	for (int i = 0; i < syncinfo->node_count; i++) {
 		dfsm_node_info_t *ni = &syncinfo->nodes[i];
 		ni->synced = 1;
 
+		sync_node_ids[i] = ni->nodeid;
+
 		if (syncinfo->local == ni)
 			local_index = i;
 
@@ -1791,6 +1834,10 @@ dfsm_process_state_update(
 		cfs_critical("unable to merge log files");
 	}
 
+	// Clear our copy of the kvstore of every node that is not part of the current sync. When
+	// such a node joins again, it will sync its current kvstore with cfs_kvstore_sync().
+	cfs_status_clear_other_kvstores(sync_node_ids, syncinfo->node_count);
+
 	cfs_kvstore_sync();
 
 	return 1;
-- 
2.30.2