Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.

(Apologies for the length of this post, but it was unavoidable due to
this being the 1st review of a very large 1700-line patch)

======
src/backend/catalog/pg_subscription.c

1. GetSubscriptionSequences

+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */

Is that comment right? The palloc seems to be done in
CacheMemoryContext, not in the current context.

~

2.
This code is very similar to the existing function
GetSubscriptionRelations(). In fact, I did not understand how the two
functions know what they are returning:

E.g. how does GetSubscriptionRelations not return sequences too?
E.g. how does GetSubscriptionSequences not return relations too?

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:
nitpick - put the sequence logic *after* the relations logic because
that seems to be the order used everywhere else.

~~~

3. AlterSubscription_refresh

- logicalrep_worker_stop(sub->oid, relid);
+ /* Stop the worker if relation kind is not sequence*/
+ if (relkind != RELKIND_SEQUENCE)
+ logicalrep_worker_stop(sub->oid, relid);

Can you give more reasons in the comment explaining why the stop is
skipped for sequences?

~

nitpick - period and space in the comment

~~~

4.
  for (off = 0; off < remove_rel_len; off++)
  {
  if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
- sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+ get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
  {
Would this new logic perhaps be better written as:

if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
  continue;

~~~

AlterSubscription_refreshsequences:
nitpick - rename AlterSubscription_refresh_sequences

~
5.
There is significant code overlap between the existing
AlterSubscription_refresh and the new function
AlterSubscription_refreshsequences. I wonder if it is better to try to
combine the logic and just pass another parameter to
AlterSubscription_refresh saying to update the existing sequences if
necessary. Particularly since the AlterSubscription_refresh is already
tweaked to work for sequences. Of course, the resulting combined
function would be large and complex, but maybe that would still be
better than having giant slabs of nearly identical cut/paste code.
Thoughts?
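
For example, maybe the combined function just grows an extra flag
(the parameter name here is only illustrative):

static void
AlterSubscription_refresh(Subscription *sub, bool copy_data,
                          List *validate_publications,
                          bool refresh_sequences);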

~~~

check_publications_origin:
nitpick - move variable declarations
~~~

fetch_sequence_list:
nitpick - change /tablelist/seqlist/
nitpick - tweak the spaces of the SQL for alignment (similar to
fetch_table_list)

~

6.
+    " WHERE s.pubname IN (");
+ first = true;
+ foreach_ptr(String, pubname, publications)
+ {
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname->sval));
+ }
+ appendStringInfoChar(&cmd, ')');

IMO this can be written much better by using the get_publications_str()
function to do all this list work.
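
e.g. Assuming the same static get_publications_str() helper that
fetch_table_list() already uses is reachable here, then AFAICT the whole
pubname loop could become just:

get_publications_str(publications, &cmd, true);
appendStringInfoChar(&cmd, ')');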

======
src/backend/replication/logical/launcher.c

7. logicalrep_worker_find

/*
 * Walks the workers array and searches for one that matches given
 * subscription id and relid.
 *
 * We are only interested in the leader apply worker or table sync worker.
 */

The above function comment (not in the 0003 patch) is stale because
AFAICT this function is also going to return sequence workers if it
finds one.

~~~

8. logicalrep_sequence_sync_worker_find

+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)

There are other similar functions for walking the workers array to
search for a worker. Instead of having different functions for
different cases, wouldn't it be cleaner to combine these into a single
function, where you pass a parameter (e.g. a mask of worker types that
you are interested in finding)?
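
e.g. A rough sketch of what I am imagining (the function name and the
bitmask encoding of the worker types are only illustrative):

static LogicalRepWorker *
logicalrep_worker_find_by_type(Oid subid, Oid relid, int type_mask,
                               bool only_running)
{
    Assert(LWLockHeldByMe(LogicalRepWorkerLock));

    for (int i = 0; i < max_logical_replication_workers; i++)
    {
        LogicalRepWorker *w = &LogicalRepCtx->workers[i];

        /* Skip unused slots and worker types the caller did not ask for. */
        if (!w->in_use || ((1 << w->type) & type_mask) == 0)
            continue;

        if (w->subid != subid)
            continue;

        /* Match a specific relid only when the caller passed one. */
        if (OidIsValid(relid) && w->relid != relid)
            continue;

        if (only_running && !w->proc)
            continue;

        return w;
    }

    return NULL;
}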

~

nitpick - declare a for loop variable 'i'

~~~

9. logicalrep_apply_worker_find

+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)

All the other find* functions assume the lock is already held
(Assert(LWLockHeldByMe(LogicalRepWorkerLock));). But this one is
different. IMO it might be better to acquire the lock in the caller to
make all the find* functions look the same. Anyway, that would help to
combine everything into a single "find" function as suggested in the
previous review comment #8.

~

nitpick - declare a for loop variable 'i'
nitpick - removed unnecessary parens in condition.

~~~

10. logicalrep_worker_launch

/*----------
* Sanity checks:
* - must be valid worker type
* - tablesync workers are only ones to have relid
* - parallel apply worker is the only kind of subworker
*/

The above code-comment (not in the 0003 patch) seems stale. This
should now also mention sequence sync workers, right?

~~~

11.
- Assert(is_tablesync_worker == OidIsValid(relid));
+ Assert(is_tablesync_worker == OidIsValid(relid) ||
is_sequencesync_worker == OidIsValid(relid));

IIUC there is only a single sequence sync worker for handling all the
sequences. So, what does the 'relid' actually mean here when there are
multiple sequences?

~~~

12. logicalrep_seqsyncworker_failuretime

+/*
+ * Set the sequence sync worker failure time
+ *
+ * Called on sequence sync worker failure exit.
+ */

12a.
The comment should be improved to make it more clear that the failure
time of the sync worker information is stored with the *apply* worker.
See also other review comments in this post about this area -- perhaps
all this can be removed?

~

12b.
Curious if this had to be a separate exit handler, or if maybe this
could have been handled by the existing logicalrep_worker_onexit
handler. See also other review comments in this post about this area
-- perhaps all this can be removed?

======
.../replication/logical/sequencesync.c

13. fetch_sequence_data

13a.
The function comment has no explanation of what exactly the returned
value means. It seems like it is what you will assign as 'last_value'
on the subscriber-side.

~

13b.
Some of the analogous table functions are named like
'fetch_remote_table_info()'. Maybe it is better to do similarly here
(e.g. include the word "remote" in the function name).

~

14.
The reason for the addition logic "(last_value + log_cnt)" is not
obvious. I am guessing it might be related to code from
'nextval_internal' (fetch = log = fetch + SEQ_LOG_VALS;) but it is
complicated. It is unfortunate that the field 'log_cnt' seems hardly
commented anywhere at all.

Also, I am not 100% sure if I trust the logic in the first place. The
caller of this function is doing:
sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
/* sets the sequence with sequence_value */
SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

Won't that mean you can get to a situation where the subscriber-side
result of lastval('s') can be *ahead* of lastval('s') on the
publisher? That doesn't seem good.

~~~

copy_sequence:

nitpick - ERROR message. Reword "for table..." to be more like the 2nd
error message immediately below.
nitpick - /RelationGetRelationName(rel)/relname/
nitpick - moved the Assert for 'relkind' to be nearer the assignment.

~

15.
+ /*
+ * Logical replication of sequences is based on decoding WAL records,
+ * describing the "next" state of the sequence the current state in the
+ * relfilenode is yet to reach. But during the initial sync we read the
+ * current state, so we need to reconstruct the WAL record logged when we
+ * started the current batch of sequence values.
+ *
+ * Otherwise we might get duplicate values (on subscriber) if we failed
+ * over right after the sync.
+ */
+ sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+ /* sets the sequence with sequence_value */
+ SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

(This is related to earlier review comment #14 above.) IMO all
this tricky commentary belongs in the function header of
"fetch_sequence_data", where it should be describing that function's
return value.

~~~

LogicalRepSyncSequences:
nitpick - declare void param
nitpick - indentation
nitpick - wrapping
nitpick - /sequencerel/sequence_rel/
nitpick - blank lines

~

16.
+ if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid,
false) == RLS_ENABLED)
+ ereport(ERROR,
+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("user \"%s\" cannot replicate into relation with row-level
security enabled: \"%s\"",
+ GetUserNameFromId(GetUserId(), true),
+ RelationGetRelationName(sequencerel)));

This should be reworded to refer to sequences instead of relations. Maybe like:
user \"%s\" cannot replicate into sequence \"%s\" with row-level
security enabled

~

17.
The calculations involving the BATCH size seem a bit tricky.
e.g. in 1st place it is doing: (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
e.g. in 2nd place it is doing: (next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0)

Maybe this batch logic can be simplified somehow using a bool variable
for the calculation?
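
e.g. Perhaps something like the following sketch (keeping the same
MAX_SEQUENCES_SYNC_PER_BATCH constant and the per-batch transaction):

for (int curr_seq = 0; curr_seq < seq_count; curr_seq++)
{
    bool    first_in_batch = (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0;
    bool    last_in_batch = ((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH == 0) ||
                            (curr_seq + 1) == seq_count;

    if (first_in_batch)
        StartTransactionCommand();

    /* ... sync the sequence at list position curr_seq ... */

    if (last_in_batch)
    {
        /* ... LOG the sequences synchronized in this batch ... */
        CommitTransactionCommand();
    }
}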

Also, where does the number 100 come from? Why not 1000? Why not 10?
Why have batching at all? Maybe there should be some comment to
describe the reason and the chosen value.

~

18.
+ next_seq = curr_seq + 1;
+ if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+ {
+ /* LOG all the sequences synchronized during current batch. */
+ int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+ for (; i <= curr_seq; i++)
+ {
+ SubscriptionRelState *done_seq;
+ done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+ ereport(LOG,
+ errmsg("logical replication synchronization for subscription \"%s\",
sequence \"%s\" has finished",
+    get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+ }
+
+ CommitTransactionCommand();
+ }
+
+ curr_seq++;

I feel this batching logic needs more comments describing what you are
doing here.

~~~

SequencesyncWorkerMain:
nitpick - spaces in the function comment

======
src/backend/replication/logical/tablesync.c

19. finish_sync_worker

-finish_sync_worker(void)
+finish_sync_worker(bool istable)

IMO, for better readability (here and in the callers) the new
parameter should be the enum LogicalRepWorkerType. Since we have that
enum, might as well make good use of it.

~

nitpick - /sequences synchronization worker/sequence synchronization worker/
nitpick - comment tweak

~

20.
+ char relkind;
+
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind == RELKIND_SEQUENCE)
+ continue;

I am wondering if it is possible to put the relkind check *before* the
TX code here, because in case there are *only* sequences then maybe
everything would be skipped and there would have been no need for any
TX at all in the first place.

~~~

process_syncing_sequences_for_apply:

nitpick - fix typo and slightly reword the function header comment.
Also /last start time/last failure time/
nitpick - tweak comments
nitpick - blank lines

~

21.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+ continue;

Wondering (like in review comment #20) if it is possible to swap those,
because maybe there is no need for any TX at all if the other condition
would always continue.

~~~

22.
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;

It seems to me that the 'sequencesync_failure_time' logic may be
unnecessarily complicated. Can't the same "throttling" be achieved by
storing the synchronization worker 'start time' instead of the 'failure
time'? In that case you won't have to mess around with considering
whether the sync worker failed or just exited normally etc. You might
also be able to remove all the logicalrep_seqsyncworker_failuretime()
exit handler code.
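
e.g. A sketch of what I mean (the 'sequencesync_start_time' field is
hypothetical):

if (nsyncworkers < max_sync_workers_per_subscription)
{
    TimestampTz now = GetCurrentTimestamp();

    /* Throttle on the time we last *started* a sequence sync worker. */
    if (!MyLogicalRepWorker->sequencesync_start_time ||
        TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_start_time,
                                   now, wal_retrieve_retry_interval))
    {
        MyLogicalRepWorker->sequencesync_start_time = now;

        /* ... launch the sequence sync worker as the patch already does ... */
    }
}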

~~~

process_syncing_tables:
nitpick - let's process tables before sequences (because all other
code is generally in this same order)
nitpick - removed some excessive comments about code that is not
supposed to happen

======
src/backend/replication/logical/worker.c

should_apply_changes_for_rel:
nitpick - IMO there were excessive comments for something that is not
going to happen

~~~

23. InitializeLogRepWorker

/*
 * Common initialization for leader apply worker, parallel apply worker and
 * tablesync worker.
 *
 * Initialize the database connection, in-memory subscription and necessary
 * config options.
 */

That comment (not part of patch 0003) is stale; it should now mention
the sequence sync worker as well, right?

~

nitpick - Tweak plural /sequences sync worker/sequence sync worker/

~~~

24. SetupApplyOrSyncWorker

/* Common function to setup the leader apply or tablesync worker. */

That comment (not part of patch 0003) is stale; it should now mention
the sequence sync worker as well, right?

======
src/include/nodes/parsenodes.h

25.
  ALTER_SUBSCRIPTION_ADD_PUBLICATION,
  ALTER_SUBSCRIPTION_DROP_PUBLICATION,
  ALTER_SUBSCRIPTION_REFRESH,
+ ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,

For consistency with your new enum it would be better to also change
the existing enum name ALTER_SUBSCRIPTION_REFRESH ==>
ALTER_SUBSCRIPTION_REFRESH_PUBLICATION.

======
src/include/replication/logicalworker.h

nitpick - IMO should change the function name
/SequencesyncWorkerMain/SequenceSyncWorkerMain/, and in passing make
the same improvement to the TablesyncWorkerMain function name.

======
src/include/replication/worker_internal.h

26.
  WORKERTYPE_PARALLEL_APPLY,
+ WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;

AFAIK the enum order should not matter here so it would be better to
put the WORKERTYPE_SEQUENCESYNC directly after the
WORKERTYPE_TABLESYNC to keep the similar things together.

~

nitpick - IMO change the macro name
/isSequencesyncWorker/isSequenceSyncWorker/, and in passing make the
same improvement to the isTablesyncWorker macro name.

======
src/test/subscription/t/034_sequences.pl

nitpick - Copyright year
nitpick - Modify the "Create subscriber node" comment for consistency
nitpick - Modify comments slightly for the setup structure parts
nitpick - Add or remove various blank lines
nitpick - Since you have sequences 's2' and 's3', IMO it makes more
sense to call the original sequence 's1' instead of just 's'
nitpick - Rearrange so the CREATE PUBLICATION/SUBSCRIPTION can stay together
nitpick - Modified some comment styles to clearly delineate all the
main "TEST" scenarios
nitpick - In the REFRESH PUBLICATION test the create new sequence and
update existing can be combined (like you do in a later test).
nitpick - Changed some of the test messages for REFRESH PUBLICATION
which seemed wrong
nitpick - Added another test for 's1' in REFRESH PUBLICATION SEQUENCES
nitpick - Changed some of the test messages for REFRESH PUBLICATION
SEQUENCES which seemed wrong

~

27.
IIUC the preferred practice is to give these test object names a
'regress_' prefix.

~

28.
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');

28a.
Maybe it is better to say "SELECT last_value, log_cnt, is_called"
instead of "SELECT *" ?
Note - this is in a couple of places.

~

28b.
Can you explain why the expected sequence value is 132, because
AFAICT you only called nextval('s') 100 times, so why isn't it 100?
My guess is that it is related to code in "nextval_internal"
(fetch = log = fetch + SEQ_LOG_VALS;) but it kind of defies the
expectations of the test, so if it really is correct then it needs
commentary.

Actually, I found other regression test code that deals with this:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time, so just test for the expected range
SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM
foo_seq_new;

Do you have to do something similar? Or is this a bug? See my other
review comments for function fetch_sequence_data in sequencesync.c

======
99.
Please also see the attached diffs patch, which implements the nitpicks
mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia
diff --git a/src/backend/commands/subscriptioncmds.c 
b/src/backend/commands/subscriptioncmds.c
index f7e51da..b9eaf2b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -771,37 +771,37 @@ CreateSubscription(ParseState *pstate, 
CreateSubscriptionStmt *stmt,
                         */
                        table_state = opts.copy_data ? SUBREL_STATE_INIT : 
SUBREL_STATE_READY;
 
-                       /* Add the sequences in init state */
-                       sequences = fetch_sequence_list(wrconn, publications);
-                       foreach_ptr(RangeVar, rv, sequences)
+                       /*
+                        * Get the table list from publisher and build local 
table status
+                        * info.
+                        */
+                       tables = fetch_table_list(wrconn, publications);
+                       foreach(lc, tables)
                        {
+                               RangeVar   *rv = (RangeVar *) lfirst(lc);
                                Oid                     relid;
 
                                relid = RangeVarGetRelid(rv, AccessShareLock, 
false);
 
                                /* Check for supported relkind. */
                                CheckSubscriptionRelkind(get_rel_relkind(relid),
-                                                                               
rv->schemaname, rv->relname);
+                                                                               
 rv->schemaname, rv->relname);
 
                                AddSubscriptionRelState(subid, relid, 
table_state,
                                                                                
InvalidXLogRecPtr, true);
                        }
 
-                       /*
-                        * Get the table list from publisher and build local 
table status
-                        * info.
-                        */
-                       tables = fetch_table_list(wrconn, publications);
-                       foreach(lc, tables)
+                       /* Add the sequences in init state */
+                       sequences = fetch_sequence_list(wrconn, publications);
+                       foreach_ptr(RangeVar, rv, sequences)
                        {
-                               RangeVar   *rv = (RangeVar *) lfirst(lc);
                                Oid                     relid;
 
                                relid = RangeVarGetRelid(rv, AccessShareLock, 
false);
 
                                /* Check for supported relkind. */
                                CheckSubscriptionRelkind(get_rel_relkind(relid),
-                                                                               
 rv->schemaname, rv->relname);
+                                                                               
rv->schemaname, rv->relname);
 
                                AddSubscriptionRelState(subid, relid, 
table_state,
                                                                                
InvalidXLogRecPtr, true);
@@ -1028,7 +1028,7 @@ AlterSubscription_refresh(Subscription *sub, bool 
copy_data,
 
                                RemoveSubscriptionRel(sub->oid, relid);
 
-                               /* Stop the worker if relation kind is not 
sequence*/
+                               /* Stop the worker if relation kind is not 
sequence. */
                                if (relkind != RELKIND_SEQUENCE)
                                        logicalrep_worker_stop(sub->oid, relid);
 
@@ -1106,7 +1106,7 @@ AlterSubscription_refresh(Subscription *sub, bool 
copy_data,
  * Refresh the sequences data of the subscription.
  */
 static void
-AlterSubscription_refreshsequences(Subscription *sub)
+AlterSubscription_refresh_sequences(Subscription *sub)
 {
        char       *err;
        List       *pubseq_names = NIL;
@@ -1574,7 +1574,7 @@ AlterSubscription(ParseState *pstate, 
AlterSubscriptionStmt *stmt,
 
                                PreventInTransactionBlock(isTopLevel, "ALTER 
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-                               AlterSubscription_refreshsequences(sub);
+                               AlterSubscription_refresh_sequences(sub);
 
                                break;
                        }
@@ -2235,13 +2235,11 @@ check_publications_origin(WalReceiverConn *wrconn, List 
*publications,
        for (i = 0; i < subrel_count; i++)
        {
                Oid                     relid = subrel_local_oids[i];
-               char       *schemaname;
-               char       *tablename;
 
                if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
                {
-                       schemaname = 
get_namespace_name(get_rel_namespace(relid));
-                       tablename = get_rel_name(relid);
+                       char *schemaname = 
get_namespace_name(get_rel_namespace(relid));
+                       char *tablename = get_rel_name(relid);
 
                        appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND 
C.relname = '%s')\n",
                                                         schemaname, tablename);
@@ -2427,14 +2425,14 @@ fetch_sequence_list(WalReceiverConn *wrconn, List 
*publications)
        TupleTableSlot *slot;
        Oid                     tableRow[2] = {TEXTOID, TEXTOID};
        bool            first;
-       List       *tablelist = NIL;
+       List       *seqlist = NIL;
 
        Assert(list_length(publications) > 0);
 
        initStringInfo(&cmd);
        appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, 
s.sequencename\n"
-                                                  "  FROM 
pg_catalog.pg_publication_sequences s\n"
-                                                  " WHERE s.pubname IN (");
+                                                  "      FROM 
pg_catalog.pg_publication_sequences s\n"
+                                                  "      WHERE s.pubname IN 
(");
        first = true;
        foreach_ptr(String, pubname, publications)
        {
@@ -2470,7 +2468,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List 
*publications)
                Assert(!isnull);
 
                rv = makeRangeVar(nspname, relname, -1);
-               tablelist = lappend(tablelist, rv);
+               seqlist = lappend(seqlist, rv);
 
                ExecClearTuple(slot);
        }
@@ -2478,7 +2476,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List 
*publications)
 
        walrcv_clear_result(res);
 
-       return tablelist;
+       return seqlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c 
b/src/backend/postmaster/bgworker.c
index 6770e26..f8dd93a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,10 +131,10 @@ static const struct
                "ParallelApplyWorkerMain", ParallelApplyWorkerMain
        },
        {
-               "TablesyncWorkerMain", TablesyncWorkerMain
+               "TableSyncWorkerMain", TableSyncWorkerMain
        },
        {
-               "SequencesyncWorkerMain", SequencesyncWorkerMain
+               "SequenceSyncWorkerMain", SequenceSyncWorkerMain
        }
 };
 
diff --git a/src/backend/replication/logical/launcher.c 
b/src/backend/replication/logical/launcher.c
index 2451eca..4ab470f 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -276,18 +276,17 @@ logicalrep_worker_find(Oid subid, Oid relid, bool 
only_running)
 LogicalRepWorker *
 logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
 {
-       int                     i;
        LogicalRepWorker *res = NULL;
 
        Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
        /* Search for attached worker for a given subscription id. */
-       for (i = 0; i < max_logical_replication_workers; i++)
+       for (int i = 0; i < max_logical_replication_workers; i++)
        {
                LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
                /* Skip non sequence sync workers. */
-               if (!isSequencesyncWorker(w))
+               if (!isSequenceSyncWorker(w))
                        continue;
 
                if (w->in_use && w->subid == subid && (only_running && w->proc))
@@ -331,15 +330,13 @@ logicalrep_workers_find(Oid subid, bool only_running)
 static LogicalRepWorker *
 logicalrep_apply_worker_find(Oid subid, bool only_running)
 {
-       int                     i;
-
        LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-       for (i = 0; i < max_logical_replication_workers; i++)
+       for (int i = 0; i < max_logical_replication_workers; i++)
        {
                LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-               if (isApplyWorker(w) && w->subid == subid && (only_running && 
w->proc))
+               if (isApplyWorker(w) && w->subid == subid && only_running && 
w->proc)
                {
                        LWLockRelease(LogicalRepWorkerLock);
                        return w;
@@ -545,7 +542,7 @@ retry:
                        break;
 
                case WORKERTYPE_TABLESYNC:
-                       snprintf(bgw.bgw_function_name, BGW_MAXLEN, 
"TablesyncWorkerMain");
+                       snprintf(bgw.bgw_function_name, BGW_MAXLEN, 
"TableSyncWorkerMain");
                        snprintf(bgw.bgw_name, BGW_MAXLEN,
                                         "logical replication tablesync worker 
for subscription %u sync %u",
                                         subid,
@@ -554,7 +551,7 @@ retry:
                        break;
 
                case WORKERTYPE_SEQUENCESYNC:
-                       snprintf(bgw.bgw_function_name, BGW_MAXLEN, 
"SequencesyncWorkerMain");
+                       snprintf(bgw.bgw_function_name, BGW_MAXLEN, 
"SequenceSyncWorkerMain");
                        snprintf(bgw.bgw_name, BGW_MAXLEN,
                                         "logical replication sequencesync 
worker for subscription %u",
                                         subid);
@@ -941,7 +938,7 @@ logicalrep_sync_worker_count(Oid subid)
        {
                LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-               if (isTablesyncWorker(w) && w->subid == subid)
+               if (isTableSyncWorker(w) && w->subid == subid)
                        res++;
        }
 
@@ -1392,7 +1389,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
                worker_pid = worker.proc->pid;
 
                values[0] = ObjectIdGetDatum(worker.subid);
-               if (isTablesyncWorker(&worker))
+               if (isTableSyncWorker(&worker))
                        values[1] = ObjectIdGetDatum(worker.relid);
                else
                        nulls[1] = true;
diff --git a/src/backend/replication/logical/sequencesync.c 
b/src/backend/replication/logical/sequencesync.c
index 92980e8..0ba8c1a 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -109,8 +109,8 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
        if (res->status != WALRCV_OK_TUPLES)
                ereport(ERROR,
                                (errcode(ERRCODE_CONNECTION_FAILURE),
-                                errmsg("could not fetch sequence info for 
table \"%s.%s\" from publisher: %s",
-                                               nspname, 
RelationGetRelationName(rel), res->err)));
+                                errmsg("sequence \"%s.%s\" info could not be 
fetched from publisher: %s",
+                                               nspname, relname, res->err)));
 
        slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
        if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
@@ -123,12 +123,11 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
        Assert(!isnull);
        relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
        Assert(!isnull);
+       Assert(relkind == RELKIND_SEQUENCE);
 
        ExecDropSingleTupleTableSlot(slot);
        walrcv_clear_result(res);
 
-       Assert(relkind == RELKIND_SEQUENCE);
-
        /*
         * Logical replication of sequences is based on decoding WAL records,
         * describing the "next" state of the sequence the current state in the
@@ -152,12 +151,12 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
  * Start syncing the sequences in the sync worker.
  */
 static void
-LogicalRepSyncSequences()
+LogicalRepSyncSequences(void)
 {
        char       *err;
        bool            must_use_password;
-       List *sequences;
-       char       slotname[NAMEDATALEN];
+       List       *sequences;
+       char            slotname[NAMEDATALEN];
        AclResult       aclresult;
        UserContext ucxt;
        bool            run_as_owner  = false;
@@ -169,8 +168,7 @@ LogicalRepSyncSequences()
 
        /* Get the sequences that should be synchronized. */
        StartTransactionCommand();
-       sequences = GetSubscriptionSequences(subid,
-                                                                               
 SUBREL_STATE_INIT);
+       sequences = GetSubscriptionSequences(subid, SUBREL_STATE_INIT);
        CommitTransactionCommand();
 
        /* Is the use of a password mandatory? */
@@ -197,7 +195,7 @@ LogicalRepSyncSequences()
        seq_count = list_length(sequences);
        foreach_ptr(SubscriptionRelState, seqinfo, sequences)
        {
-               Relation        sequencerel;
+               Relation        sequence_rel;
                XLogRecPtr      sequence_lsn;
                int                     next_seq;
 
@@ -206,7 +204,7 @@ LogicalRepSyncSequences()
                if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
                        StartTransactionCommand();
 
-               sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+               sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
 
                /*
                 * Make sure that the copy command runs as the sequence owner, 
unless the
@@ -214,18 +212,18 @@ LogicalRepSyncSequences()
                 */
                run_as_owner = MySubscription->runasowner;
                if (!run_as_owner)
-                       SwitchToUntrustedUser(sequencerel->rd_rel->relowner, 
&ucxt);
+                       SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, 
&ucxt);
 
                /*
                 * Check that our sequence sync worker has permission to insert 
into the
                 * target sequence.
                 */
-               aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), 
GetUserId(),
+               aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), 
GetUserId(),
                                                                        
ACL_INSERT);
                if (aclresult != ACLCHECK_OK)
                        aclcheck_error(aclresult,
-                                               
get_relkind_objtype(sequencerel->rd_rel->relkind),
-                                               
RelationGetRelationName(sequencerel));
+                                               
get_relkind_objtype(sequence_rel->rd_rel->relkind),
+                                               
RelationGetRelationName(sequence_rel));
 
                /*
                 * COPY FROM does not honor RLS policies.  That is not a 
problem for
@@ -234,28 +232,30 @@ LogicalRepSyncSequences()
                 * circumvent RLS.  Disallow logical replication into RLS 
enabled
                 * relations for such roles.
                 */
-               if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, 
false) == RLS_ENABLED)
+               if (check_enable_rls(RelationGetRelid(sequence_rel), 
InvalidOid, false) == RLS_ENABLED)
                        ereport(ERROR,
                                        errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                                        errmsg("user \"%s\" cannot replicate 
into relation with row-level security enabled: \"%s\"",
                                                        
GetUserNameFromId(GetUserId(), true),
-                                                       
RelationGetRelationName(sequencerel)));
+                                                       
RelationGetRelationName(sequence_rel)));
 
-               sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, 
sequencerel);
+               sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, 
sequence_rel);
 
                UpdateSubscriptionRelState(subid, seqinfo->relid, 
SUBREL_STATE_READY,
                                                                   
sequence_lsn);
 
-               table_close(sequencerel, NoLock);
+               table_close(sequence_rel, NoLock);
 
                next_seq = curr_seq + 1;
                if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || 
next_seq == seq_count)
                {
                        /* LOG all the sequences synchronized during current 
batch. */
                        int i = curr_seq - (curr_seq % 
MAX_SEQUENCES_SYNC_PER_BATCH);
+
                        for (; i <= curr_seq; i++)
                        {
                                SubscriptionRelState *done_seq;
+
                                done_seq = (SubscriptionRelState *) 
lfirst(list_nth_cell(sequences, i));
                                ereport(LOG,
                                                errmsg("logical replication 
synchronization for subscription \"%s\", sequence \"%s\" has finished",
@@ -274,7 +274,7 @@ LogicalRepSyncSequences()
 
 /*
  * Execute the initial sync with error handling. Disable the subscription,
- * if it's required.
+ * if required.
  *
  * Allocate the slot name in long-lived context on return. Note that we don't
  * handle FATAL errors which are probably because of system resource error and
@@ -310,9 +310,9 @@ start_sequence_sync()
        PG_END_TRY();
 }
 
-/* Logical Replication Sequencesync worker entry point */
+/* Logical Replication sequence sync worker entry point */
 void
-SequencesyncWorkerMain(Datum main_arg)
+SequenceSyncWorkerMain(Datum main_arg)
 {
        int                     worker_slot = DatumGetInt32(main_arg);
 
diff --git a/src/backend/replication/logical/tablesync.c 
b/src/backend/replication/logical/tablesync.c
index a15b6cd..01f5a85 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -164,14 +164,14 @@ finish_sync_worker(bool istable)
                                           
get_rel_name(MyLogicalRepWorker->relid)));
        else
                ereport(LOG,
-                               errmsg("logical replication sequences 
synchronization worker for subscription \"%s\" has finished",
+                               errmsg("logical replication sequence 
synchronization worker for subscription \"%s\" has finished",
                                           MySubscription->name));
        CommitTransactionCommand();
 
        /* Find the leader apply worker and signal it. */
        logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
-       /* No need to set the failure time in case of a clean exit */
+       /* No need to set the sequence failure time when it is a clean exit */
        if (!istable)
                cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 
0);
 
@@ -683,13 +683,13 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
  * synchronization for them.
  *
  * If there is a sequence synchronization worker running already, no need to
- * start a sequence synchronization in this case. The existing sequence
- * sync worker will synchronize the sequences. If there are still any sequences
- * to be synced after the sequence sync worker exited, then we new sequence
- * sync worker can be started in the next iteration. To prevent starting the
- * sequence sync worker at a high frequency after a failure, we store its last
- * start time. We start the sync worker for the same relation after waiting
- * at least wal_retrieve_retry_interval.
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent starting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time. We start the sync
+ * worker for the same relation after waiting at least
+ * wal_retrieve_retry_interval.
  */
 static void
 process_syncing_sequences_for_apply()
@@ -702,7 +702,7 @@ process_syncing_sequences_for_apply()
        FetchTableStates(&started_tx);
 
        /*
-        * Start sequence sync worker if there is no sequence sync worker 
running.
+        * Start sequence sync worker if there is not one already.
         */
        foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
        {
@@ -720,22 +720,19 @@ process_syncing_sequences_for_apply()
                        continue;
 
                /*
-                * Check if there is a sequence worker running?
+                * Check if there is a sequence worker already running?
                 */
                LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
                syncworker = 
logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
                                                                                
                                        true);
-               /*
-                * If there is a sequence sync worker, the sequence sync worker
-                * will handle sync of this sequence.
-                */
                if (syncworker)
                {
                        /* Now safe to release the LWLock */
                        LWLockRelease(LogicalRepWorkerLock);
                        break;
                }
+
                else
                {
                        /*
@@ -750,13 +747,12 @@ process_syncing_sequences_for_apply()
 
                        /*
                         * If there are free sync worker slot(s), start a new 
sequence sync
-                        * worker to sync the sequences and break from the 
loop, as this
-                        * sequence sync worker will take care of synchronizing 
all the
-                        * sequences that are in init state.
+                        * worker, and break from the loop.
                         */
                        if (nsyncworkers < max_sync_workers_per_subscription)
                        {
                                TimestampTz now = GetCurrentTimestamp();
+
                                if 
(!MyLogicalRepWorker->sequencesync_failure_time ||
                                        
TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
                                                                                
           now, wal_retrieve_retry_interval))
@@ -804,14 +800,13 @@ process_syncing_tables(XLogRecPtr current_lsn)
                        break;
 
                case WORKERTYPE_APPLY:
-                       process_syncing_sequences_for_apply();
                        process_syncing_tables_for_apply(current_lsn);
+                       process_syncing_sequences_for_apply();
                        break;
 
-               /* Sequence sync is not expected to come here */
                case WORKERTYPE_SEQUENCESYNC:
+                       /* Should never happen. */
                        Assert(0);
-                       /* not reached, here to make compiler happy */
                        break;
 
                case WORKERTYPE_UNKNOWN:
@@ -1837,7 +1832,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
        int                     worker_slot = DatumGetInt32(main_arg);
 
diff --git a/src/backend/replication/logical/worker.c 
b/src/backend/replication/logical/worker.c
index d0b0715..63dff38 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,10 +489,9 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
                                        (rel->state == SUBREL_STATE_SYNCDONE &&
                                         rel->statelsn <= remote_final_lsn));
 
-               /* Sequence sync is not expected to come here */
                case WORKERTYPE_SEQUENCESYNC:
+                       /* Should never happen. */
                        Assert(0);
-                       /* not reached, here to make compiler happy */
                        break;
 
                case WORKERTYPE_UNKNOWN:
@@ -4639,7 +4638,7 @@ InitializeLogRepWorker(void)
                                                
get_rel_name(MyLogicalRepWorker->relid))));
        else if (am_sequencesync_worker())
                ereport(LOG,
-                               (errmsg("logical replication sequences 
synchronization worker for subscription \"%s\" has started",
+                               (errmsg("logical replication sequence 
synchronization worker for subscription \"%s\" has started",
                                                MySubscription->name)));
        else
                ereport(LOG,
@@ -4689,7 +4688,7 @@ SetupApplyOrSyncWorker(int worker_slot)
                                                                  
invalidate_syncing_table_states,
                                                                  (Datum) 0);
 
-       if (isSequencesyncWorker(MyLogicalRepWorker))
+       if (isSequenceSyncWorker(MyLogicalRepWorker))
                before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 
0);
 }
 
diff --git a/src/include/replication/logicalworker.h 
b/src/include/replication/logicalworker.h
index f380c1b..47a3326 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,8 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t 
ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
-extern void SequencesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h 
b/src/include/replication/worker_internal.h
index 3701b15..502ecef 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -338,21 +338,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
                                                           (worker)->type == 
WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
                                                                           
(worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
                                                                   
(worker)->type == WORKERTYPE_TABLESYNC)
-#define isSequencesyncWorker(worker) ((worker)->in_use && \
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
                                                                          
(worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-       return isTablesyncWorker(MyLogicalRepWorker);
+       return isTableSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
 am_sequencesync_worker(void)
 {
-       return isSequencesyncWorker(MyLogicalRepWorker);
+       return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/034_sequences.pl 
b/src/test/subscription/t/034_sequences.pl
index 94bf83a..7272efa 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -1,5 +1,5 @@
 
-# Copyright (c) 2021, PostgreSQL Global Development Group
+# Copyright (c) 2024, PostgreSQL Global Development Group
 
 # This tests that sequences are synced correctly to the subscriber
 use strict;
@@ -13,101 +13,109 @@ my $node_publisher = 
PostgreSQL::Test::Cluster->new('publisher');
 $node_publisher->init(allows_streaming => 'logical');
 $node_publisher->start;
 
-# Create subscriber node
+# Initialize subscriber node
 my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
 $node_subscriber->init(allows_streaming => 'logical');
 $node_subscriber->start;
 
-# Create some preexisting content on publisher
+# Setup structure on the publisher
 my $ddl = qq(
        CREATE TABLE seq_test (v BIGINT);
-       CREATE SEQUENCE s;
+       CREATE SEQUENCE s1;
 );
-
-# Setup structure on the publisher
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Create some the same structure on subscriber, and an extra sequence that
+# Setup the same structure on the subscriber, plus some extra sequences that
 # we'll create on the publisher later
 $ddl = qq(
        CREATE TABLE seq_test (v BIGINT);
-       CREATE SEQUENCE s;
+       CREATE SEQUENCE s1;
        CREATE SEQUENCE s2;
        CREATE SEQUENCE s3;
 );
-
 $node_subscriber->safe_psql('postgres', $ddl);
 
-# Setup logical replication
-my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
-$node_publisher->safe_psql('postgres',
-       "CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
-
 # Insert initial test data
 $node_publisher->safe_psql(
        'postgres', qq(
        -- generate a number of values using the sequence
-       INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+       INSERT INTO seq_test SELECT nextval('s1') FROM generate_series(1,100);
 ));
 
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+       "CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
 $node_subscriber->safe_psql('postgres',
        "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' 
PUBLICATION seq_pub"
 );
 
-# Wait for initial sync to finish as well
+# Wait for initial sync to finish
 my $synced_query =
   "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN 
('r');";
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+#
+# TEST:
+#
+# Check the initial data on subscriber
+#
 my $result = $node_subscriber->safe_psql(
        'postgres', qq(
-       SELECT * FROM s;
+       SELECT * FROM s1;
 ));
-
 is($result, '132|0|t', 'initial test data replicated');
 
-# create a new sequence, it should be synced
+#
+# TEST:
+#
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+#
+
+# create a new sequence 's2', and update existing sequence 's1'
 $node_publisher->safe_psql(
        'postgres', qq(
        CREATE SEQUENCE s2;
        INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
-));
 
-# changes to existing sequences should not be synced
-$node_publisher->safe_psql(
-       'postgres', qq(
-       INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+    -- Existing sequence
+       INSERT INTO seq_test SELECT nextval('s1') FROM generate_series(1,100);
 ));
 
-# Refresh publication after create a new sequence and updating existing
-# sequence.
+# do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
        ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
 ));
-
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+# check - existing sequence is not synced
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
-       SELECT * FROM s;
+       SELECT * FROM s1;
 ));
+is($result, '132|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
 
-is($result, '132|0|t', 'initial test data replicated');
-
+# check - newly published sequence is synced
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
        SELECT * FROM s2;
 ));
+is($result, '132|0|t', 'REFRESH PUBLICATION will sync newly published 
sequence');
 
-is($result, '132|0|t', 'initial test data replicated');
+#
+# TEST:
+#
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+#
 
-# Changes of both new and existing sequence should be synced after REFRESH
-# PUBLICATION SEQUENCES.
+# create a new sequence 's3', and update the existing sequence 's2'
 $node_publisher->safe_psql(
        'postgres', qq(
        CREATE SEQUENCE s3;
@@ -117,8 +125,7 @@ $node_publisher->safe_psql(
        INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
 ));
 
-# Refresh publication sequences after create new sequence and updating existing
-# sequence.
+# do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
        ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
@@ -127,19 +134,23 @@ $result = $node_subscriber->safe_psql(
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+# check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+       'postgres', qq(
+       SELECT * FROM s1;
+));
+is($result, '231|0|t', 'REFRESH PUBLICATION SEQUENCES will sync existing 
sequences');
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
        SELECT * FROM s2;
 ));
+is($result, '231|0|t', 'REFRESH PUBLICATION SEQUENCES will sync existing 
sequences');
 
-is($result, '231|0|t', 'initial test data replicated');
-
+# check - newly published sequence is synced
 $result = $node_subscriber->safe_psql(
        'postgres', qq(
        SELECT * FROM s3;
 ));
-
-is($result, '132|0|t', 'initial test data replicated');
+is($result, '132|0|t', 'REFRESH PUBLICATION SEQUENCES will sync newly 
published sequence');
 
 done_testing();
