On 2017-02-19 17:21, Erik Rijkers wrote:
0001-Use-asynchronous-connect-API-in-libpqwalreceiver-v2.patch
0002-Always-initialize-stringinfo-buffers-in-walsender-v2.patch
0003-Fix-after-trigger-execution-in-logical-replication-v2.patch
0004-Add-RENAME-support-for-PUBLICATIONs-and-SUBSCRIPTION-v2.patch
0001-Logical-replication-support-for-initial-data-copy-v4.patch
Attached is a patch to improve the readability of the comment blocks
in src/backend/replication/logical/origin.c.
thanks,
Erik Rijkers
--- src/backend/replication/logical/origin.c.orig 2017-02-19 16:45:28.558865304 +0100
+++ src/backend/replication/logical/origin.c 2017-02-19 17:11:09.034023021 +0100
@@ -11,31 +11,29 @@
* NOTES
*
* This file provides the following:
- * * An infrastructure to name nodes in a replication setup
- * * A facility to efficiently store and persist replication progress in an
- * efficient and durable manner.
- *
- * Replication origin consist out of a descriptive, user defined, external
- * name and a short, thus space efficient, internal 2 byte one. This split
- * exists because replication origin have to be stored in WAL and shared
+ * * Infrastructure to name nodes in a replication setup
+ * * A facility to efficiently store and persist replication progress
+ *
+ * A replication origin has a descriptive, user-defined, external
+ * name and a short, internal 2-byte one. This split
+ * exists because a replication origin has to be stored in WAL and shared
* memory and long descriptors would be inefficient. For now only use 2 bytes
* for the internal id of a replication origin as it seems unlikely that there
- * soon will be more than 65k nodes in one replication setup; and using only
- * two bytes allow us to be more space efficient.
+ * soon will be more than 65k nodes in one replication setup.
*
* Replication progress is tracked in a shared memory table
- * (ReplicationStates) that's dumped to disk every checkpoint. Entries
+ * (ReplicationStates) that is dumped to disk every checkpoint. Entries
* ('slots') in this table are identified by the internal id. That's the case
* because it allows to increase replication progress during crash
* recovery. To allow doing so we store the original LSN (from the originating
* system) of a transaction in the commit record. That allows to recover the
- * precise replayed state after crash recovery; without requiring synchronous
+ * precise replayed state after crash recovery without requiring synchronous
* commits. Allowing logical replication to use asynchronous commit is
* generally good for performance, but especially important as it allows a
* single threaded replay process to keep up with a source that has multiple
* backends generating changes concurrently. For efficiency and simplicity
- * reasons a backend can setup one replication origin that's from then used as
- * the source of changes produced by the backend, until reset again.
+ * reasons a backend can set up one replication origin that is used as
+ * the source of changes produced by the backend, until it is reset again.
*
* This infrastructure is intended to be used in cooperation with logical
* decoding. When replaying from a remote system the configured origin is
@@ -45,11 +43,11 @@
* There are several levels of locking at work:
*
* * To create and drop replication origins an exclusive lock on
- * pg_replication_slot is required for the duration. That allows us to
- * safely and conflict free assign new origins using a dirty snapshot.
+ * pg_replication_slot is required. That allows us to
+ * assign new origins safely and conflict-free using a dirty snapshot.
*
- * * When creating an in-memory replication progress slot the ReplicationOirgin
- * LWLock has to be held exclusively; when iterating over the replication
+ * * When creating an in-memory replication progress slot the ReplicationOrigin
+ * LWLock has to be held exclusively. When iterating over the replication
* progress a shared lock has to be held, the same when advancing the
* replication progress of an individual backend that has not setup as the
* session's replication origin.
@@ -57,7 +55,7 @@
* * When manipulating or looking at the remote_lsn and local_lsn fields of a
* replication progress slot that slot's lwlock has to be held. That's
* primarily because we do not assume 8 byte writes (the LSN) is atomic on
- * all our platforms, but it also simplifies memory ordering concerns
+ * all our platforms, but it also simplifies memory ordering
* between the remote and local lsn. We use a lwlock instead of a spinlock
* so it's less harmful to hold the lock over a WAL write
* (c.f. AdvanceReplicationProgress).
@@ -305,7 +303,7 @@
}
}
- /* now release lock again, */
+ /* now release lock again. */
heap_close(rel, ExclusiveLock);
if (tuple == NULL)
@@ -382,7 +380,7 @@
CommandCounterIncrement();
- /* now release lock again, */
+ /* now release lock again. */