This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch replicate-purges
in repository https://gitbox.apache.org/repos/asf/couchdb.git
commit 87dbdf8d0a8c4232156671651c065bd945d00f69
Author: Nick Vatamaniuc <[email protected]>
AuthorDate: Mon Jun 13 17:21:27 2022 -0400

    Enable replicating purge requests between nodes

    It seems we had forgotten to enable it. If a node is off-line when a
    clustered purge request is issued and then re-joins the cluster, it
    won't know to process purge requests that were issued to the other
    nodes.
---
 rel/overlay/etc/default.ini | 6 ++++++
 src/mem3/src/mem3_rep.erl   | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/rel/overlay/etc/default.ini b/rel/overlay/etc/default.ini
index 162ccb926..5a8b25454 100644
--- a/rel/overlay/etc/default.ini
+++ b/rel/overlay/etc/default.ini
@@ -278,6 +278,12 @@ bind_address = 127.0.0.1
 ; shard_cache_size = 25000
 ; shards_db = _dbs
 ; sync_concurrency = 10
+;
+; When enabled, the internal replicator will replicate purge requests between
+; shard copies. It may be helpful to disable it temporarily when doing rolling
+; node upgrades from CouchDB 2.x to 3.x, since older 2.x nodes do not have the
+; API function needed to handle clustered purges.
+;replicate_purges = true
 ;
 [fabric]
 ; all_docs_concurrency = 10

diff --git a/src/mem3/src/mem3_rep.erl b/src/mem3/src/mem3_rep.erl
index afb3bc72b..42f1269cb 100644
--- a/src/mem3/src/mem3_rep.erl
+++ b/src/mem3/src/mem3_rep.erl
@@ -301,7 +301,7 @@ repl(#acc{db = Db0} = Acc0) ->
     Acc1 = calculate_start_seq_multi(Acc0),
     try
         Acc3 =
-            case config:get_boolean("mem3", "replicate_purges", false) of
+            case config:get_boolean("mem3", "replicate_purges", true) of
                 true ->
                     Acc2 = pull_purges_multi(Acc1),
                     push_purges_multi(Acc2);
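
As a usage sketch: since `mem3.replicate_purges` is an ordinary config key, an operator could also toggle it at runtime through CouchDB's per-node config HTTP API rather than editing `default.ini`. This is an illustrative sketch, not part of the commit; the host, port, and `admin:password` credentials are placeholder assumptions for a local 3.x node.

```shell
# Read the current value. Note: the config HTTP API returns 404 if the key
# was never explicitly set; the in-code default (true after this commit)
# then applies.
curl -s -u admin:password \
  http://127.0.0.1:5984/_node/_local/_config/mem3/replicate_purges

# Temporarily disable purge replication, e.g. during a 2.x -> 3.x rolling
# upgrade, then re-enable it once all nodes are on 3.x.
curl -s -u admin:password -X PUT \
  http://127.0.0.1:5984/_node/_local/_config/mem3/replicate_purges \
  -d '"false"'
```

The `_local` placeholder addresses whichever node receives the request; repeat the PUT against each node (or use its node name) to change the setting cluster-wide.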
