If processing a corosync.conf update is delayed on a single node, reloading
the config too early can have disastrous results (loss of token and HA
fence). Artificially delay the reload command by one second to allow update
propagation in most scenarios, until a proper solution (e.g., broadcasting
and querying the locally deployed config versions) has been developed and
fully tested.
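The "proper solution" hinted at above could look roughly like the sketch
below: instead of a fixed sleep, poll until the locally deployed config
version matches the broadcast one (or a timeout expires) before triggering
the corosync reload. This is a hypothetical illustration, not part of the
patch; get_local_version() stands in for however pmxcfs would query the
version actually written out on each node (here it just simulates a
propagation delay).

#include <assert.h>
#include <stdio.h>
#include <unistd.h>

static int simulated_version = 1; /* test stand-in for on-disk state */

/* hypothetical: in a real implementation this would parse the deployed
 * corosync.conf (or query broadcast state); here the version "arrives"
 * after a few polls to simulate delayed propagation */
static int get_local_version(void) {
	static int polls = 0;
	if (++polls >= 3)
		simulated_version = 2;
	return simulated_version;
}

/* poll until the expected config version is observed locally;
 * returns 1 on success, 0 if max_polls elapsed without a match */
static int wait_for_config_version(int expected, int max_polls,
				   useconds_t interval_us) {
	for (int i = 0; i < max_polls; i++) {
		if (get_local_version() == expected)
			return 1;
		usleep(interval_us);
	}
	return 0;
}

int main(void) {
	/* the broadcast update bumps config_version to 2; wait for it
	 * before it would be safe to run corosync-cfgtool -R */
	int ok = wait_for_config_version(2, 10, 1000);
	assert(ok == 1);
	printf("config version propagated, safe to reload\n");
	return 0;
}

Compared to a hard-coded sleep(1), this fails loudly on timeout instead of
silently reloading a stale config, at the cost of needing a reliable way to
read back the version each node has actually written out.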
Reported on the forum:

https://forum.proxmox.com/threads/expanding-cluster-reboots-all-vms.110903/

The reported issue can be reproduced by deploying a patched pmxcfs on the
non-reloading node (one that sleeps before writing out a broadcasted
corosync.conf update) and then adding a node to the cluster, leading to the
following sequence of events:

- corosync config reload command received
- corosync config update written out

This causes that particular node to have a different view of the cluster
topology, which makes all corosync communication fail for all nodes until
corosync on the affected node is restarted (the on-disk config is correct
after all, just not in effect).

Signed-off-by: Fabian Grünbichler <f.gruenbich...@proxmox.com>
---

Tested new cluster creation from scratch, and cluster expansion (on a test
PVE cluster with HA enabled and running guests, to simulate some load).

 data/src/dcdb.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/data/src/dcdb.c b/data/src/dcdb.c
index b690355..58351ed 100644
--- a/data/src/dcdb.c
+++ b/data/src/dcdb.c
@@ -410,6 +410,12 @@ dcdb_sync_corosync_conf(
 		HOST_CLUSTER_CONF_FN, new_version);
 
 	if (notify_corosync && old_version) {
+		/*
+		 * sleep for 1s to hopefully allow new config to propagate
+		 * FIXME: actually query the status somehow?
+		 */
+		sleep(1);
+
 		/* tell corosync that there is a new config file */
 		cfs_debug ("run corosync-cfgtool -R");
 		int status = system("corosync-cfgtool -R >/dev/null 2>&1");
-- 
2.30.2

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel