On Wed, Jun 5, 2024 at 7:29 PM Dilip Kumar wrote:
>
> On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila wrote:
> >
> > Can you share the use case of "earliest_timestamp_wins" resolution
> > method? It seems after the initial update on the local node, it will
> > never allow remote update to succeed
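To make the semantics under discussion concrete, here is a minimal, hypothetical sketch (Python; the function and variable names are invented, not from any patch) of an earliest_timestamp_wins resolver: the change with the older commit timestamp is kept, which is why, after an initial local update, every later remote update keeps losing.

```python
from datetime import datetime, timezone

def resolve_earliest_timestamp_wins(local_commit_ts, remote_commit_ts):
    """Return which change wins under an earliest_timestamp_wins policy:
    the change with the older commit timestamp is kept."""
    return "local" if local_commit_ts <= remote_commit_ts else "remote"

# After an initial local update, any later remote update necessarily
# carries a newer commit timestamp, so the remote change always loses --
# the behavior being questioned above.
t_local = datetime(2024, 6, 5, 12, 0, 0, tzinfo=timezone.utc)
t_remote = datetime(2024, 6, 5, 12, 0, 5, tzinfo=timezone.utc)
print(resolve_earliest_timestamp_wins(t_local, t_remote))  # -> local
```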
On Fri, Apr 26, 2024 at 1:10 PM Daniel Gustafsson wrote:
>
> > On 22 Mar 2024, at 11:42, Nisha Moond wrote:
>
> > Here is the v4 patch with changes required in slotfuncs.c and slotsync.c
> > files.
>
> - errmsg("could not connect to the primary server:
On Mon, May 27, 2024 at 11:19 AM shveta malik wrote:
>
> On Sat, May 25, 2024 at 2:39 AM Tomas Vondra
> wrote:
> >
> > On 5/23/24 08:36, shveta malik wrote:
> > > Hello hackers,
> > >
> > > Please find the proposal for Conflict Detection and Resolution (CDR)
> > > for Logical replication.
> > >
I ran a performance test on the optimization patch
(v2-0001-optimize-the-slot-advancement.patch). Please find the
results below:
Setup:
- One primary node with 100 failover-enabled logical slots
- 20 DBs, each having 5 failover-enabled logical replication slots
- One physical standby node with
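For reference, a setup like the above (20 DBs with 5 failover-enabled slots each, 100 in total) can be scripted. The sketch below only generates the SQL statements and does not run them; the database and slot names are invented. The fifth argument of pg_create_logical_replication_slot() is the failover flag.

```python
# Hypothetical sketch: generate the slot-creation statements for
# 20 databases x 5 failover-enabled logical slots (names are made up).
def setup_statements(n_dbs=20, slots_per_db=5):
    stmts = []
    for d in range(1, n_dbs + 1):
        for s in range(1, slots_per_db + 1):
            # args: slot_name, plugin, temporary, twophase, failover
            stmts.append(
                f"SELECT pg_create_logical_replication_slot("
                f"'db{d}_slot{s}', 'pgoutput', false, false, true);"
            )
    return stmts

stmts = setup_statements()
print(len(stmts))  # 100 slots in total
```

Each statement would then be run via psql against its respective database.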
On Wed, Mar 13, 2024 at 11:16 AM Peter Smith wrote:
>
> FYI -- some more code has been pushed since this patch was last
> updated. AFAICT perhaps you'll want to update this patch again for the
> following new connection messages on HEAD:
>
> - slotfuncs.c [1]
> - slotsync.c [2]
>
> --
>
I did performance tests for the v99 patch with respect to wait-time
analysis. Since this patch introduces a wait for the standby before
changes are sent to a subscriber, on the primary node I logged the
time at the start and end of the XLogSendLogical() call (which
eventually calls WalSndWaitForWal()) and
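The measurement described above can be sketched as follows (a hypothetical Python stand-in for the C instrumentation; timed_call and the sleep stub are invented, not the patch's code):

```python
import time

def timed_call(fn, log):
    """Record wall-clock time spent inside fn, mimicking the start/end
    logging added around the XLogSendLogical() call for the analysis."""
    start = time.monotonic()
    result = fn()
    elapsed = time.monotonic() - start
    log.append(elapsed)
    return result

log = []
# Stand-in for XLogSendLogical()/WalSndWaitForWal(); here we just sleep.
timed_call(lambda: time.sleep(0.05), log)
print(f"wait time: {log[0]:.3f}s")
```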
We conducted stress testing for the patch with a setup of one primary
node with 100 tables and five subscribers, each having 20
subscriptions. We then created three physical standbys syncing the
logical replication slots from the primary node.
All 100 slots were successfully synced on all three standbys.
> AFAIK some recently committed patches (e.g. [1] for the "slot sync"
> development) have created some more cases of "could not connect..."
> messages. So, you might need to enhance your patch to deal with any
> new ones in the latest HEAD.
>
> ==
> [1]
>
On Fri, Jan 12, 2024 at 7:06 PM Aleksander Alekseev
wrote:
>
> Hi,
>
> Thanks for the patch.
>
> > Due to this behavior, it is not possible to add a test to show the
> > error message as it is done for CREATE SUBSCRIPTION.
> > Let me know if you think there is another way to add this test.
>
> I
>
> ~~
>
> BTW, while experimenting with the bad connection ALTER I also tried
> setting 'disable_on_error' like below:
>
> ALTER SUBSCRIPTION sub4 SET (disable_on_error);
> ALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';
>
> ...but here the subscription did not become DISABLED as I expected it
>
A review on v62-006: failover-ready validation steps doc -
+ Next, check that the logical replication slots identified above exist on
+ the standby server. This step can be skipped if
+ standby_slot_names has been correctly configured.
+
+test_standby=# SELECT bool_and(synced
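For what it's worth, the logic of the (truncated) bool_and(synced ...) validation query amounts to a logical AND over the synced flags of the relevant slots, e.g. (slot names invented):

```python
# The validation aggregates the "synced" flag over the relevant slots;
# in effect it computes a logical AND, as bool_and() does in SQL:
slots = [
    {"slot_name": "sub1_slot", "synced": True},
    {"slot_name": "sub2_slot", "synced": True},
]
failover_ready = all(s["synced"] for s in slots)
print(failover_ready)  # True only if every slot is synced on the standby
```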
Thanks for reviewing, please find my response inline.
On Wed, Jan 17, 2024 at 4:56 AM Peter Smith wrote:
>
> On Sat, Jan 13, 2024 at 12:36 AM Aleksander Alekseev
> wrote:
> >
> > Hi,
> >
> > Thanks for the patch.
> >
> > > Due to this behavior, it is not possible to add a test to show the
> > >
Thanks for the review. Attached v2 patch with suggested changes.
Please find my response inline.
On Fri, Jan 12, 2024 at 8:20 AM Peter Smith wrote:
>
> Thanks for the patch! Here are a couple of review comments for it.
>
> ==
> src/backend/commands/subscriptioncmds.c
>
> 1.
> @@ -742,7
Hi Hackers,
Several parts of the code use the walrcv_connect() function, which is
employed by processes such as the walreceiver, the logical replication
apply worker, etc., to establish connections with other hosts.
Presently, in case of connection failure, the error message lacks
information
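To illustrate the kind of improvement being proposed (this is an invented helper for illustration only, not the patch's actual code), the error text could carry the failing process and its connection target:

```python
# Hypothetical sketch: enrich a bare "could not connect" message with
# the process name and the connection target that failed.
def connection_error(process, host, port, detail):
    return (f'{process}: could not connect to the primary server '
            f'(host "{host}", port {port}): {detail}')

msg = connection_error("logical replication apply worker",
                       "localhost", 5433, "connection refused")
print(msg)
```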
Thanks for working on it. I tested the patch on my system, and it
resolved the issue with commands run with -V (version check).
As you mentioned, I am also still seeing intermittent errors even with
the patch as below -
in 'pg_upgrade/002_pg_upgrade' -
# Running: pg_upgrade --no-sync -d
Review for v47 patch -
(1)
When we try to create a subscription on standby using a synced slot
that is in 'r' sync_state, the subscription will be created at the
subscriber, and on standby, two actions will take place -
(i) As copy_data is true by default, it will switch the failover
state
A review on v45 patch:
If one creates a logical slot with failover=true as -
select pg_create_logical_replication_slot('logical_slot','pgoutput',
false, true, true);
Then, uses the existing logical slot while creating a subscription -
postgres=# create subscription sub4 connection
On Fri, Dec 1, 2023 at 5:40 PM Nisha Moond wrote:
>
> Review for v41 patch.
>
> 1.
> ==
> src/backend/utils/misc/postgresql.conf.sample
>
> +#enable_syncslot = on # enables slot synchronization on the physical
> standby from the primary
>
> enable_sy
Review for v41 patch.
1.
==
src/backend/utils/misc/postgresql.conf.sample
+#enable_syncslot = on # enables slot synchronization on the physical
standby from the primary
enable_syncslot is disabled by default, so it should be 'off' here.
~~~
2.
IIUC, the slotsyncworker's connection to the
On Fri, Nov 3, 2023 at 5:02 PM Nisha Moond wrote:
>
> On Thu, Nov 2, 2023 at 11:52 AM Kyotaro Horiguchi
> wrote:
> >
> > At Tue, 31 Oct 2023 18:11:48 +0530, vignesh C wrote in
> > > Few others are also facing this problem with similar code like in:
> > &
y command \"%s\"\n", cmd);
…
…
And the log looks like -
cmd output - postgres (PostgreSQL) 17devel
no data was returned by command
""D:/Project/pg1/postgres/tmp_install/bin/pg_controldata" -V"
check for "D:/Project/pg1/postgres/tmp_install/bin/pg_controldata"
failed: cannot execute
Failure, exiting
Attached test result log for the same - "regress_log_003_logical_slots".
Thanks,
Nisha Moond
no data was returned by command
""D:/Project/pg1/postgres/tmp_install/bin/pg_dump" -V"
check for "D:/Project/pg1/postgres/tmp_install/bin/pg_dump" failed:
cannot execute
Failure, exiting
[16:08:50.444](7.434s) not ok 10 - run of pg_upgrade of old cluster
Has anyone come across this issue? I am not sure what is the issue here.
Any thoughts?
Thanks,
Nisha Moond