On 2017-03-06 11:27, Petr Jelinek wrote:

0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch
0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch
0003-Prevent-snapshot-builder-xmin-from-going-backwards.patch
0004-Fix-xl_running_xacts-usage-in-snapshot-builder.patch
0005-Skip-unnecessary-snapshot-builds.patch
0001-Logical-replication-support-for-initial-data-copy-v6.patch

I use three different machines (2 desktops, 1 server) to test logical replication, and all three have by now failed at least once to correctly synchronise a pgbench session (amidst many successful runs, of course).

I attach the output file from the test program, together with the two log files (master + replica) of the failed run. The output file (out_20170307_1613.txt) contains the output of 5 runs of pgbench_derail2.sh; the first run failed, the next 4 were ok.
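
For clarity, the kind of consistency check I mean is roughly the following sketch; the ports (6972 for master, 6973 for replica) and the exact table list are placeholders for whatever the test script happens to use:

    # compare per-table checksums between master and replica
    for t in pgbench_accounts pgbench_branches pgbench_tellers pgbench_history
    do
      m=$(psql -qtAX -p 6972 -c "copy (select * from $t order by 1) to stdout" | md5sum)
      r=$(psql -qtAX -p 6973 -c "copy (select * from $t order by 1) to stdout" | md5sum)
      if [ "$m" = "$r" ]; then echo "$t: ok"; else echo "$t: MISMATCH"; fi
    done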

But that's probably not very useful; perhaps pg_waldump is more useful? From what moment, or leading up to what moment, or over what period, would a pg_waldump be useful? I can run it from the script, repeatedly, and only keep the dumped files when things go awry. Would that make sense?
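
Something like the following is what I have in mind (just a sketch: the port, the 'NOK' failure marker, and the file names are made up, and I assume $PGDATA points at the master's data directory):

    # record the WAL range covered by one run of the test (master on port 6972)
    start_lsn=$(psql -qtAX -p 6972 -c "select pg_current_wal_lsn()")
    ./pgbench_derail2.sh > run.out 2>&1
    end_lsn=$(psql -qtAX -p 6972 -c "select pg_current_wal_lsn()")

    # keep a dump of that range only when the run went awry
    if grep -q 'NOK' run.out    # hypothetical failure marker in the script output
    then
      pg_waldump --path=$PGDATA/pg_wal --start=$start_lsn --end=$end_lsn \
          > waldump_$(date +%Y%m%d_%H%M%S).txt
    fi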

Any other ideas welcome.


thanks,

Erik Rijkers



Attachment: 20170307_1613.tar.bz2
Description: BZip2 compressed data
