reshke commented on issue #1566:
URL: https://github.com/apache/cloudberry/issues/1566#issuecomment-3904949084
Commit 2ebb8416cc4 conflicts:
```
reshke@yezzey-cbdb-bench:~/cloudberry$ git diff doc/src/sgml/ref/pg_dump.sgml
diff --cc doc/src/sgml/ref/pg_dump.sgml
index ca6ff8cdc65,b6b66bf068c..00000000000
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@@ -371,12 -371,12 +371,18 @@@ PostgreSQL documentatio
  </para>
  <para>
   Requesting exclusive locks on database objects while running a parallel dump could
++<<<<<<< HEAD
 + cause the dump to fail. The reason is that the <application>pg_dump</application> coordinator process
 + requests shared locks on the objects that the worker processes are going to dump later in order to
++=======
 + cause the dump to fail. The reason is that the <application>pg_dump</application> leader process
 + requests shared locks (<link linkend="locking-tables">ACCESS SHARE</link>) on the
 + objects that the worker processes are going to dump later in order to
++>>>>>>> 2ebb8416cc4 (Clarify that pg_dump takes ACCESS SHARE lock)
   make sure that nobody deletes them and makes them go away while the dump is running.
   If another client then requests an exclusive lock on a table, that lock will not be
 - granted but will be queued waiting for the shared lock of the leader process to be
 + granted but will be queued waiting for the shared lock of the coordinator process to be
   released. Consequently any other access to the table will not be granted either and
   will queue after the exclusive lock request. This includes the worker process trying
   to dump the table. Without any precautions this would be a classic deadlock situation.
reshke@yezzey-cbdb-bench:~/cloudberry$
```
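For context, the conflicted passage describes the classic parallel-dump lock-queueing deadlock. A minimal sketch of the scenario it warns about, assuming a scratch database `mydb` with a hypothetical table `t` and a directory-format parallel dump:

```sql
-- Session 1 (shell): start a parallel dump. The pg_dump coordinator
-- process takes ACCESS SHARE locks on every table to be dumped:
--   pg_dump -j 4 -Fd -f /tmp/dump mydb

-- Session 2 (psql): request an exclusive lock on one of those tables.
-- It conflicts with ACCESS SHARE, so it queues behind the
-- coordinator's lock:
BEGIN;
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- blocks here

-- Every later lock request on t -- including the ACCESS SHARE lock the
-- dump worker needs in order to dump t -- now queues behind this
-- exclusive request, which is the deadlock the documentation describes.
```

This is an illustration of the locking interaction, not a self-contained test; it needs a running server and a second session to observe the blocking.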
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]