This is a review of the 0003 patch. Overall the patch looks good and makes
the decoding logic easier to understand.

+                                          data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT

Looking at this output, I am wondering how this patch would work with DDL
replication. I should have noticed this earlier, sorry. A sequence DDL has two
parts: changes to the catalogs and changes to the data file. Support for
replicating the data file changes is added by these patches. The catalog
changes will need to be supported by the DDL replication patch. When applying
the DDL changes, there are two options: 1. apply just the catalog changes and
let the support added here apply the data changes; 2. apply both kinds of
changes. If the second route is chosen, all the "transactional" decoding and
application support added by this patch will need to be ripped out. That would
make the "transactional" field in the protocol useless, with the potential to
waste bandwidth in the future.

OTOH, I feel that waiting for the DDL replication patch set to be committed
would delay this patch set for an unknown duration. That's undesirable too.

One solution I see is to use the Storage RMID WAL again. While decoding it, we
send a message to the subscriber telling it that a new relfilenode is being
allocated to a sequence. The subscriber then also allocates a new relfilenode
to the sequence. The sequence data changes are sent without the "transactional"
flag, but they are decoded as transactional or non-transactional using the same
logic as the current patch set. The subscriber always applies these changes to
the relfilenode associated with the sequence at that point in time. This would
have the same effect as the current patch set. But then there is potential that
the DDL replication patch set will render the Storage decoding useless, so that
is not an option either. Anyway, I am leaving this here as an alternative
thought, albeit a discarded one; it might trigger a better idea.

What do you think?

+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;

The third implicit nextval won't be logged, so whether the subtransaction is
rolled back or committed won't have much effect on the decoding. Adding a
subtransaction around the first INSERT itself might be more useful, to test
that a subtransaction rollback does not roll back the sequence changes.
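Something along these lines (just a sketch; the table and savepoint names are
placeholders) would exercise that path:

```sql
-- Sketch: roll back the subtransaction containing the first implicit
-- nextval() call and check that the sequence advance itself survives,
-- since sequence changes are non-transactional.
BEGIN;
CREATE TABLE test_table (a SERIAL, b INT);
SAVEPOINT a;
INSERT INTO test_table (b) VALUES (100);  -- implicit nextval(), WAL-logged
ROLLBACK TO SAVEPOINT a;
INSERT INTO test_table (b) VALUES (200);  -- sequence continues past 1
COMMIT;
```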

After adding {'include_sequences', false} to the calls to
pg_logical_slot_get_changes() in other tests, the SQL statements have grown
beyond 80 characters. They need to be split across multiple lines.
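For example, something like this keeps each line under 80 characters (a
sketch; the slot name and the other options are illustrative, and I am using
the hyphenated option name from the quoted hunk below):

```sql
-- Sketch: split the long pg_logical_slot_get_changes() call so each
-- line stays within the 80-character limit.
SELECT data FROM pg_logical_slot_get_changes(
    'regression_slot', NULL, NULL,
    'include-xids', '0',
    'skip-empty-xacts', '1',
    'include-sequences', 'false');
```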

         }
+        else if (strcmp(elem->defname, "include-sequences") == 0)
+        {
+
+            if (elem->arg == NULL)
+                data->include_sequences = false;

By default include_sequences = true. Shouldn't it then be set to true here?

After looking at the option processing code in
pg_logical_slot_get_changes_guts(), it looks like an argument can never be
NULL. But since we have checks for NULL values of other arguments, it's OK to
keep a NULL check here too.

I will look at 0004 next.

-- 
Best Wishes,
Ashutosh Bapat

