[couchdb] branch transaction_too_large updated (d0404f0 -> ad48e5a)
This is an automated email from the ASF dual-hosted git repository.

jiangphcn pushed a change to branch transaction_too_large
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

 discard  d0404f0  convert erlfdb_error 2101 to transaction_too_large
     add  ad48e5a  convert erlfdb_error 2101 to transaction_too_large

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs when a user
--force pushes a change and generates a repository containing
something like this:

 * -- * -- B -- O -- O -- O   (d0404f0)
            \
             N -- N -- N   refs/heads/transaction_too_large (ad48e5a)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them. Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
[couchdb] 01/01: convert erlfdb_error 2101 to transaction_too_large
This is an automated email from the ASF dual-hosted git repository.

jiangphcn pushed a commit to branch transaction_too_large
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit d0404f05b05ca26ca1c59f8413ed64f3f7ca2fa2
Author: jiangph
AuthorDate: Tue Oct 13 10:55:21 2020 +0800

    convert erlfdb_error 2101 to transaction_too_large
---
 src/chttpd/src/chttpd.erl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/chttpd/src/chttpd.erl b/src/chttpd/src/chttpd.erl
index 1a9b19b..04f65c9 100644
--- a/src/chttpd/src/chttpd.erl
+++ b/src/chttpd/src/chttpd.erl
@@ -362,6 +362,8 @@ catch_error(HttpReq, error, decryption_failed) ->
     send_error(HttpReq, decryption_failed);
 catch_error(HttpReq, error, not_ciphertext) ->
     send_error(HttpReq, not_ciphertext);
+catch_error(HttpReq, error, {erlfdb_error, 2101}) ->
+    send_error(HttpReq, {request_transcation_too_large, 2101});
 catch_error(HttpReq, Tag, Error) ->
     Stack = erlang:get_stacktrace(),
     % TODO improve logging and metrics collection for client disconnects
@@ -1009,6 +1011,10 @@ error_info({request_entity_too_large, {bulk_get, Max}}) when is_integer(Max) ->
     {413, <<"max_bulk_get_count_exceeded">>, integer_to_binary(Max)};
 error_info({request_entity_too_large, DocID}) ->
     {413, <<"document_too_large">>, DocID};
+error_info({request_transcation_too_large, Code}) ->
+    CodeBin = integer_to_binary(Code),
+    {413, <<"request_transcation_too_large">>,
+        <<"The request transcation is too large. (", CodeBin/binary, ")">>};
 error_info({error, security_migration_updates_disabled}) ->
     {503, <<"security_migration">>,
         <<"Updates to security docs are disabled during "
         "security migration.">>};
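The two clauses in the patch above translate FoundationDB's error code 2101 ("transaction exceeds byte limit") into an HTTP 413 response. As an illustration only — the real code is the Erlang above, not this — here is a small Python model of the new `error_info` clause. Note that the patch itself spells the reason `transcation`; the model preserves that spelling rather than silently correcting committed code.

```python
def error_info(error):
    """Python model of the error_info/1 clause added by this patch.

    FoundationDB raises erlfdb error 2101 when a transaction exceeds the
    byte limit; the patch maps it to an HTTP 413 status. The spelling
    "transcation" is taken verbatim from the committed Erlang code.
    """
    if isinstance(error, tuple) and error[0] == "request_transcation_too_large":
        code = error[1]
        return (413, "request_transcation_too_large",
                "The request transcation is too large. (%d)" % code)
    raise ValueError("unhandled error term: %r" % (error,))

# catch_error converts {erlfdb_error, 2101} into this error term:
status, name, reason = error_info(("request_transcation_too_large", 2101))
```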
[couchdb] branch transaction_too_large created (now d0404f0)
This is an automated email from the ASF dual-hosted git repository.

jiangphcn pushed a change to branch transaction_too_large
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

      at  d0404f0  convert erlfdb_error 2101 to transaction_too_large

This branch includes the following new commits:

     new  d0404f0  convert erlfdb_error 2101 to transaction_too_large

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
[couchdb] branch feat-disable-custom-reduce-functions updated (403bec9 -> be2f732)
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feat-disable-custom-reduce-functions
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

 discard  403bec9  Disable custom reduce functions by default
     add  be2f732  Disable custom reduce functions by default

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs when a user
--force pushes a change and generates a repository containing
something like this:

 * -- * -- B -- O -- O -- O   (403bec9)
            \
             N -- N -- N   refs/heads/feat-disable-custom-reduce-functions (be2f732)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them. Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 src/couch_views/test/couch_views_custom_red_test.erl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
[couchdb] branch feat-disable-custom-reduce-functions updated (9345a71 -> 403bec9)
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feat-disable-custom-reduce-functions
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

 discard  9345a71  Disable custom reduce functions by default
     add  403bec9  Disable custom reduce functions by default

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs when a user
--force pushes a change and generates a repository containing
something like this:

 * -- * -- B -- O -- O -- O   (9345a71)
            \
             N -- N -- N   refs/heads/feat-disable-custom-reduce-functions (403bec9)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them. Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 src/couch_views/src/couch_views_util.erl | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
[couchdb] 01/01: Disable custom reduce functions by default
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a commit to branch feat-disable-custom-reduce-functions
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 9345a71c59e3c9e4b27cc37b8979a097031c6b8c
Author: Paul J. Davis
AuthorDate: Mon Oct 12 14:08:01 2020 -0500

    Disable custom reduce functions by default

    This prevents users from creating custom reduce functions. Custom
    reduce functions are notoriously difficult to write correctly. This
    change disables them by default in the eventual 4.0 release.
---
 rel/overlay/etc/default.ini                          |   3 +
 src/chttpd/src/chttpd.erl                            |   2 +
 src/couch_views/src/couch_views_reader.erl           |  10 +-
 src/couch_views/src/couch_views_trees.erl            |   2 +-
 src/couch_views/src/couch_views_util.erl             |  34 +++-
 .../test/couch_views_custom_red_test.erl             | 194 +
 6 files changed, 240 insertions(+), 5 deletions(-)

diff --git a/rel/overlay/etc/default.ini b/rel/overlay/etc/default.ini
index 3a377c7..745a1c9 100644
--- a/rel/overlay/etc/default.ini
+++ b/rel/overlay/etc/default.ini
@@ -325,6 +325,9 @@ iterations = 10 ; iterations for password hashing
 ; Settings for view indexing
 [couch_views]
+; Enable custom reduce functions
+;custom_reduce_enabled = false
+
 ; Maximum acceptors waiting to accept view indexing jobs
 ;max_acceptors = 5
 ;
diff --git a/src/chttpd/src/chttpd.erl b/src/chttpd/src/chttpd.erl
index 1a9b19b..3100694 100644
--- a/src/chttpd/src/chttpd.erl
+++ b/src/chttpd/src/chttpd.erl
@@ -1017,6 +1017,8 @@ error_info(all_workers_died) ->
         "request due to overloading or maintenance mode.">>};
 error_info(not_implemented) ->
     {501, <<"not_implemented">>, <<"this feature is not yet implemented">>};
+error_info({disabled, Reason}) ->
+    {501, <<"disabled">>, Reason};
 error_info(timeout) ->
     {500, <<"timeout">>, <<"The request could not be processed in a reasonable"
         " amount of time.">>};
diff --git a/src/couch_views/src/couch_views_reader.erl b/src/couch_views/src/couch_views_reader.erl
index 3c58627..35ee8a0 100644
--- a/src/couch_views/src/couch_views_reader.erl
+++ b/src/couch_views/src/couch_views_reader.erl
@@ -245,11 +245,19 @@ get_map_view(Lang, Args, ViewName, Views) ->
 get_red_view(Lang, Args, ViewName, Views) ->
     case couch_mrview_util:extract_view(Lang, Args, ViewName, Views) of
-        {red, {Idx, Lang, View}, _} -> {Idx, Lang, View};
+        {red, {Idx, Lang, View}, _} -> check_red_enabled({Idx, Lang, View});
         _ -> throw({not_found, missing_named_view})
     end.

+check_red_enabled({Idx, _Lang, View} = Resp) ->
+    case lists:nth(Idx, View#mrview.reduce_funs) of
+        {_, disabled} ->
+            throw({disabled, <<"Custom reduce functions are disabled.">>});
+        _ ->
+            Resp
+    end.
+
 expand_keys_args(#mrargs{keys = undefined} = Args) ->
     [Args];
diff --git a/src/couch_views/src/couch_views_trees.erl b/src/couch_views/src/couch_views_trees.erl
index b45750b..d9340ad 100644
--- a/src/couch_views/src/couch_views_trees.erl
+++ b/src/couch_views/src/couch_views_trees.erl
@@ -323,7 +323,7 @@ make_read_only_reduce_fun(Lang, View, NthRed) ->
 make_reduce_fun(Lang, #mrview{} = View) ->
-    RedFuns = [Src || {_, Src} <- View#mrview.reduce_funs],
+    RedFuns = [Src || {_, Src} <- View#mrview.reduce_funs, Src /= disabled],
     fun
         (KVs0, _ReReduce = false) ->
             KVs1 = expand_dupes(KVs0),
diff --git a/src/couch_views/src/couch_views_util.erl b/src/couch_views/src/couch_views_util.erl
index 1e3e4be..c4130f4 100644
--- a/src/couch_views/src/couch_views_util.erl
+++ b/src/couch_views/src/couch_views_util.erl
@@ -65,7 +65,8 @@ ddoc_to_mrst(DbName, #doc{id=Id, body={Fields}}) ->
     NumViews = fun({_, View}, N) -> {View#mrview{id_num = N}, N+1} end,
-    {Views, _} = lists:mapfoldl(NumViews, 0, lists:sort(dict:to_list(BySrc))),
+    {Views0, _} = lists:mapfoldl(NumViews, 0, lists:sort(dict:to_list(BySrc))),
+    Views1 = maybe_filter_custom_reduce_funs(Views0),

     Language = couch_util:get_value(<<"language">>, Fields, <<"javascript">>),
     Lib = couch_util:get_value(<<"lib">>, RawViews, {[]}),
@@ -74,12 +75,12 @@ ddoc_to_mrst(DbName, #doc{id=Id, body={Fields}}) ->
         db_name=DbName,
         idx_name=Id,
         lib=Lib,
-        views=Views,
+        views=Views1,
         language=Language,
         design_opts=DesignOpts,
         partitioned=Partitioned
     },
-    SigInfo = {Views, Language, DesignOpts, couch_index_util:sort_lib(Lib)},
+    SigInfo = {Views1, Language, DesignOpts, couch_index_util:sort_lib(Lib)},
     {ok, IdxState#mrst{sig=couch_hash:md5_hash(term_to_binary(SigInfo))}}.

@@ -327,6 +328,33 @@ active_tasks_info(ChangesDone, DbName, DDocId, LastSeq, DBSeq) ->
     }.

+maybe_filter_custom_reduce_funs(Views) ->
+    case config:get_boolean("couch
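The email above is cut off inside `maybe_filter_custom_reduce_funs`, but the rest of the patch pins down its effect: custom reduce sources are replaced with the atom `disabled` (hence the `Src /= disabled` filter in `make_reduce_fun` and the `{_, disabled}` clause in `check_red_enabled`). The following Python sketch is a hypothetical model of that behavior, assuming built-in reducers — whose names begin with `_`, such as `_sum`, `_count`, and `_stats` — remain enabled; it is not a port of the truncated Erlang helper.

```python
def maybe_filter_custom_reduce_funs(views, custom_reduce_enabled=False):
    """Hypothetical model of the (truncated) Erlang helper in this patch.

    When custom reduces are disabled, every non-built-in reduce source is
    replaced with the marker "disabled" (the Erlang code uses the atom
    `disabled`). Built-in reducers are assumed to be the ones whose
    source starts with "_".
    """
    if custom_reduce_enabled:
        return views
    filtered = []
    for view in views:
        reduce_funs = [
            (name, src if src.startswith("_") else "disabled")
            for name, src in view["reduce_funs"]
        ]
        filtered.append(dict(view, reduce_funs=reduce_funs))
    return filtered

# A view with one built-in and one custom (JavaScript) reduce function:
views = [{
    "id_num": 0,
    "reduce_funs": [
        ("builtin", "_count"),
        ("custom", "function(keys, values) { return 1; }"),
    ],
}]
```

Under this model, querying the `custom` view would then hit the `{_, disabled}` clause in `check_red_enabled` and produce the new `501 disabled` error, while `builtin` keeps working.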
[couchdb] branch feat-disable-custom-reduce-functions created (now 9345a71)
This is an automated email from the ASF dual-hosted git repository.

davisp pushed a change to branch feat-disable-custom-reduce-functions
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

      at  9345a71  Disable custom reduce functions by default

This branch includes the following new commits:

     new  9345a71  Disable custom reduce functions by default

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
[couchdb] branch 3.x updated: Properly combine base and extra headers when making replicator requests
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch 3.x
in repository https://gitbox.apache.org/repos/asf/couchdb.git

The following commit(s) were added to refs/heads/3.x by this push:
     new ce15da0  Properly combine base and extra headers when making replicator requests

ce15da0 is described below

commit ce15da09a09407cefdf712eea4c2d20d3e83425f
Author: Nick Vatamaniuc
AuthorDate: Fri Oct 9 18:10:44 2020 -0400

    Properly combine base and extra headers when making replicator requests

    Previously we subtly relied on one set of headers being sorted, then
    sorted the other set of headers, and ran `lists:ukeymerge/3`. That
    function, however, needs both arguments to be sorted in order for it
    to work as expected. If one argument wasn't sorted we could get
    duplicate headers easily, which is what was observed in testing.

    A better fix than just sorting both sets of keys, is to use an actual
    header processing library to combine them so we can account for case
    insensitivity as well.
---
 .../src/couch_replicator_httpc.erl | 28 ++++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/src/couch_replicator/src/couch_replicator_httpc.erl b/src/couch_replicator/src/couch_replicator_httpc.erl
index 4dce319..466bc57 100644
--- a/src/couch_replicator/src/couch_replicator_httpc.erl
+++ b/src/couch_replicator/src/couch_replicator_httpc.erl
@@ -97,8 +97,8 @@ send_req(HttpDb, Params1, Callback) ->
 send_ibrowse_req(#httpdb{headers = BaseHeaders} = HttpDb0, Params) ->
     Method = get_value(method, Params, get),
-    UserHeaders = lists:keysort(1, get_value(headers, Params, [])),
-    Headers1 = lists:ukeymerge(1, UserHeaders, BaseHeaders),
+    UserHeaders = get_value(headers, Params, []),
+    Headers1 = merge_headers(BaseHeaders, UserHeaders),
     {Headers2, HttpDb} = couch_replicator_auth:update_headers(HttpDb0, Headers1),
     Url = full_url(HttpDb, Params),
     Body = get_value(body, Params, []),
@@ -493,3 +493,27 @@ backoff_before_request(Worker, HttpDb, Params) ->
         Sleep when Sleep == 0 -> ok
     end.

+merge_headers(Headers1, Headers2) when is_list(Headers1), is_list(Headers2) ->
+    Empty = mochiweb_headers:empty(),
+    Merged = mochiweb_headers:enter_from_list(Headers1 ++ Headers2, Empty),
+    mochiweb_headers:to_list(Merged).
+
+-ifdef(TEST).
+
+-include_lib("couch/include/couch_eunit.hrl").
+
+merge_headers_test() ->
+    ?assertEqual([], merge_headers([], [])),
+    ?assertEqual([{"a", "x"}], merge_headers([], [{"a", "x"}])),
+    ?assertEqual([{"a", "x"}], merge_headers([{"a", "x"}], [])),
+    ?assertEqual([{"a", "y"}], merge_headers([{"A", "x"}], [{"a", "y"}])),
+    ?assertEqual([{"a", "y"}, {"B", "x"}], merge_headers([{"B", "x"}],
+        [{"a", "y"}])),
+    ?assertEqual([{"a", "y"}], merge_headers([{"A", "z"}, {"a", "y"}], [])),
+    ?assertEqual([{"a", "y"}], merge_headers([], [{"A", "z"}, {"a", "y"}])).
+
+-endif.
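The eunit assertions above pin down the intended merge semantics: header names compare case-insensitively, later entries win, and the result comes back ordered by lowercased name. As a cross-check of those assertions outside Erlang, here is a small Python model — an illustration of the semantics, not a port of `mochiweb_headers`:

```python
def merge_headers(base, extra):
    """Model of the merge semantics implied by merge_headers_test above:
    names are case-insensitive, entries later in base ++ extra override
    earlier ones (keeping the later casing), and the result is ordered
    by lowercased name."""
    merged = {}
    for name, value in base + extra:
        # Last write wins; the stored tuple keeps the winning entry's casing.
        merged[name.lower()] = (name, value)
    return [merged[key] for key in sorted(merged)]
```

This reproduces, for example, why `merge_headers([{"A", "x"}], [{"a", "y"}])` yields `[{"a", "y"}]`: both entries share the key `a`, and the user-supplied header overrides the base one.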
[couchdb] branch main updated: Removed unused variable in merge headers unit test
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/couchdb.git

The following commit(s) were added to refs/heads/main by this push:
     new 52575df  Removed unused variable in merge headers unit test

52575df is described below

commit 52575df88ae4b268502c3a4b5e4ca36a46842389
Author: Nick Vatamaniuc
AuthorDate: Mon Oct 12 13:07:11 2020 -0400

    Removed unused variable in merge headers unit test

    `Headers1` is not used anywhere
---
 src/couch_replicator/src/couch_replicator_httpc.erl | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/couch_replicator/src/couch_replicator_httpc.erl b/src/couch_replicator/src/couch_replicator_httpc.erl
index 53158f9..cffeb86 100644
--- a/src/couch_replicator/src/couch_replicator_httpc.erl
+++ b/src/couch_replicator/src/couch_replicator_httpc.erl
@@ -507,7 +507,6 @@ merge_headers(Headers1, Headers2) when is_list(Headers1), is_list(Headers2) ->

 merge_headers_test() ->
-    Headers1 = [{"a", "x"}, {"a", "y"}, {"A", "z"}],
     ?assertEqual([], merge_headers([], [])),
     ?assertEqual([{"a", "x"}], merge_headers([], [{"a", "x"}])),
     ?assertEqual([{"a", "x"}], merge_headers([{"a", "x"}], [])),
[couchdb] branch fix-replicator-merge-headers-test created (now 7d1492a)
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a change to branch fix-replicator-merge-headers-test
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

      at  7d1492a  Removed unused variable in merge headers unit test

This branch includes the following new commits:

     new  7d1492a  Removed unused variable in merge headers unit test

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
[couchdb] 01/01: Removed unused variable in merge headers unit test
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch fix-replicator-merge-headers-test
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 7d1492a11e7b128d6c29e60a243b71c1b4d9a51d
Author: Nick Vatamaniuc
AuthorDate: Mon Oct 12 13:07:11 2020 -0400

    Removed unused variable in merge headers unit test

    `Headers1` is not used anywhere
---
 src/couch_replicator/src/couch_replicator_httpc.erl | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/couch_replicator/src/couch_replicator_httpc.erl b/src/couch_replicator/src/couch_replicator_httpc.erl
index 53158f9..cffeb86 100644
--- a/src/couch_replicator/src/couch_replicator_httpc.erl
+++ b/src/couch_replicator/src/couch_replicator_httpc.erl
@@ -507,7 +507,6 @@ merge_headers(Headers1, Headers2) when is_list(Headers1), is_list(Headers2) ->

 merge_headers_test() ->
-    Headers1 = [{"a", "x"}, {"a", "y"}, {"A", "z"}],
     ?assertEqual([], merge_headers([], [])),
     ?assertEqual([{"a", "x"}], merge_headers([], [{"a", "x"}])),
     ?assertEqual([{"a", "x"}], merge_headers([{"a", "x"}], [])),
[couchdb] 01/01: Properly combine base and extra headers when making replicator requests
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch fix-replicator-header-merging
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 68f16c2ce9f07d9f94937200ad18896419220efb
Author: Nick Vatamaniuc
AuthorDate: Fri Oct 9 18:10:44 2020 -0400

    Properly combine base and extra headers when making replicator requests

    Previously we subtly relied on one set of headers being sorted, then
    sorted the other set of headers, and ran `lists:ukeymerge/3`. That
    function, however, needs both arguments to be sorted in order for it
    to work as expected. If one argument wasn't sorted we could get
    duplicate headers easily, which is what was observed in testing.

    A better fix than just sorting both sets of keys, is to use an actual
    header processing library to combine them so we can account for case
    insensitivity as well.
---
 .../src/couch_replicator_httpc.erl | 28 ++++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/src/couch_replicator/src/couch_replicator_httpc.erl b/src/couch_replicator/src/couch_replicator_httpc.erl
index 4dce319..466bc57 100644
--- a/src/couch_replicator/src/couch_replicator_httpc.erl
+++ b/src/couch_replicator/src/couch_replicator_httpc.erl
@@ -97,8 +97,8 @@ send_req(HttpDb, Params1, Callback) ->
 send_ibrowse_req(#httpdb{headers = BaseHeaders} = HttpDb0, Params) ->
     Method = get_value(method, Params, get),
-    UserHeaders = lists:keysort(1, get_value(headers, Params, [])),
-    Headers1 = lists:ukeymerge(1, UserHeaders, BaseHeaders),
+    UserHeaders = get_value(headers, Params, []),
+    Headers1 = merge_headers(BaseHeaders, UserHeaders),
     {Headers2, HttpDb} = couch_replicator_auth:update_headers(HttpDb0, Headers1),
     Url = full_url(HttpDb, Params),
     Body = get_value(body, Params, []),
@@ -493,3 +493,27 @@ backoff_before_request(Worker, HttpDb, Params) ->
         Sleep when Sleep == 0 -> ok
     end.

+merge_headers(Headers1, Headers2) when is_list(Headers1), is_list(Headers2) ->
+    Empty = mochiweb_headers:empty(),
+    Merged = mochiweb_headers:enter_from_list(Headers1 ++ Headers2, Empty),
+    mochiweb_headers:to_list(Merged).
+
+-ifdef(TEST).
+
+-include_lib("couch/include/couch_eunit.hrl").
+
+merge_headers_test() ->
+    ?assertEqual([], merge_headers([], [])),
+    ?assertEqual([{"a", "x"}], merge_headers([], [{"a", "x"}])),
+    ?assertEqual([{"a", "x"}], merge_headers([{"a", "x"}], [])),
+    ?assertEqual([{"a", "y"}], merge_headers([{"A", "x"}], [{"a", "y"}])),
+    ?assertEqual([{"a", "y"}, {"B", "x"}], merge_headers([{"B", "x"}],
+        [{"a", "y"}])),
+    ?assertEqual([{"a", "y"}], merge_headers([{"A", "z"}, {"a", "y"}], [])),
+    ?assertEqual([{"a", "y"}], merge_headers([], [{"A", "z"}, {"a", "y"}])).
+
+-endif.
[couchdb] branch fix-replicator-header-merging created (now 68f16c2)
This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a change to branch fix-replicator-header-merging
in repository https://gitbox.apache.org/repos/asf/couchdb.git.

      at  68f16c2  Properly combine base and extra headers when making replicator requests

This branch includes the following new commits:

     new  68f16c2  Properly combine base and extra headers when making replicator requests

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
[couchdb-documentation] branch main updated: More inclusive terminology pt 1 (#599)
This is an automated email from the ASF dual-hosted git repository.

flimzy pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git

The following commit(s) were added to refs/heads/main by this push:
     new ac84181  More inclusive terminology pt 1 (#599)

ac84181 is described below

commit ac8418183d5c22556f4e2ea12872ff1a444a6938
Author: Simon Dahlbacka
AuthorDate: Mon Oct 12 11:12:59 2020 +0300

    More inclusive terminology pt 1 (#599)

    * use blocklist instead of Blacklist
    * use allowlist instead of whitelist
    * use main instead of master (when talking about branches)
---
 src/config/auth.rst           |  2 +-
 src/config/indexbuilds.rst    |  2 +-
 src/replication/conflicts.rst | 38 +++++++++++++++++++-------------------
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/src/config/auth.rst b/src/config/auth.rst
index d0d5f5d..705c771 100644
--- a/src/config/auth.rst
+++ b/src/config/auth.rst
@@ -231,7 +231,7 @@ Authentication Configuration
         public_fields = first_name, last_name, contacts, url

     .. note::
-        Using the ``public_fields`` whitelist for user document properties
+        Using the ``public_fields`` allowlist for user document properties
         requires setting the :option:`couch_httpd_auth/users_db_public` option
         to ``true`` (the latter option has no other purpose)::

diff --git a/src/config/indexbuilds.rst b/src/config/indexbuilds.rst
index fb406c7..103633c 100644
--- a/src/config/indexbuilds.rst
+++ b/src/config/indexbuilds.rst
@@ -49,7 +49,7 @@ following settings.
     database. If the difference is larger than the threshold defined here
     the background job will only be allowed to run in the main queue.
     Defaults to 1000.

-.. config:section:: ken.ignore :: Auto-Indexing Blacklist
+.. config:section:: ken.ignore :: Auto-Indexing Blocklist

     Entries in this configuration section can be used to tell the background
     indexer to skip over specific database shard files. The key must be the
     exact name of the shard with the

diff --git a/src/replication/conflicts.rst b/src/replication/conflicts.rst
index 2341fb9..67675dd 100644
--- a/src/replication/conflicts.rst
+++ b/src/replication/conflicts.rst
@@ -595,9 +595,9 @@
 is an SHA1 hash of the tip commit. If you are replicating with one or more
 peers, a separate branch is made for each of those peers. For example, you
 might have::

-    master             -- my local branch
-    remotes/foo/master -- branch on peer 'foo'
-    remotes/bar/master -- branch on peer 'bar'
+    main             -- my local branch
+    remotes/foo/main -- branch on peer 'foo'
+    remotes/bar/main -- branch on peer 'bar'

 In the regular workflow, replication is a "pull", importing changes from a
 remote peer into the local repository. A "pull" does two things: first "fetch"
@@ -609,24 +609,24 @@
 Now let's consider the business card. Alice has created a Git repo containing
 this, where ```` is the SHA1 of the commit::

     -- desktop --                    -- laptop --
-    master:                          master:
-    remotes/laptop/master:           remotes/desktop/master:
+    main:                            main:
+    remotes/laptop/main:             remotes/desktop/main:

 Now she makes a change on the desktop, and commits it into the desktop repo;
 then she makes a different change on the laptop, and commits it into the
 laptop repo::

     -- desktop --                    -- laptop --
-    master:                          master:
-    remotes/laptop/master:           remotes/desktop/master:
+    main:                            main:
+    remotes/laptop/main:             remotes/desktop/main:

 Now on the desktop she does ``git pull laptop``. First, the remote objects
 are copied across into the local repo and the remote tracking branch is
 updated::

     -- desktop --                    -- laptop --
-    master:                          master:
-    remotes/laptop/master:           remotes/desktop/master:
+    main:                            main:
+    remotes/laptop/main:             remotes/desktop/main:

 .. note:: The repo still contains because commits and
@@ -639,15 +639,15 @@
 the parent commit to ```` is ````, it takes a diff between
 If this is successful, then you'll get a new version with a merge commit::

     -- desktop --                    -- laptop --
-    master:                          master:
-    remotes/laptop/master:           remotes/desktop/master:
+    main:                            main