On Fri, Nov 21, 2025 at 9:58 AM Amit Kapila <[email protected]> wrote:
>
> On Fri, Nov 21, 2025 at 8:52 AM shveta malik <[email protected]> wrote:
> >
> > On Tue, Nov 18, 2025 at 4:07 PM Shlok Kyal <[email protected]> wrote:
> > >
> > > On Fri, 14 Nov 2025 at 14:13, Hayato Kuroda (Fujitsu)
> > > <[email protected]> wrote:
> > > >
> > > > Dear Shlok,
> > > >
> > > > Thanks for updating the patch. Few more comments.
> > > >
> > > > > > > > I’m not sure if this has already been discussed; I couldn’t
> > > > > > > > find any mention of it in the thread. Why don’t we persist
> > > > > > > > 'slot_sync_skip_reason' (it is outside of
> > > > > > > > ReplicationSlotPersistentData)? If a slot wasn’t synced during
> > > > > > > > the last cycle and the server restarts, it would be helpful to
> > > > > > > > know the reason it wasn’t synced prior to the node restart.
> > > > > > > >
> > > > > Actually, I had not thought in this direction. I think it will be
> > > > > useful to persist 'slot_sync_skip_reason', so I have made that
> > > > > change in the latest patch.
> > > >
> > > > Hmm, I'm wondering whether it should be written to disk. The other
> > > > attributes stored on disk are essential for decoding or replicating
> > > > changes correctly, but the sync status is not used for that purpose.
> > > > Personally, I think slot sync would restart soon after a reboot, so
> > > > it is OK to start with an empty state. How about others?
> > > >
> > > > If we want to serialize the info, there are further tasks to do:
> > > > - update SLOT_VERSION
> > > > - mark the slot dirty and then SaveSlotToPath() when the status is
> > > >   updated.
> > > >
> > > I agree with your point. Slot synchronization will restart shortly
> > > after a reboot, so it seems reasonable to begin with an empty state
> > > rather than persisting slot_sync_skip_reason.
> > > For now, I’ve updated the patch so that slot_sync_skip_reason is no
> > > longer persisted; its initialization is kept outside of
> > > ReplicationSlotPersistentData. I’d also like to hear what others
> > > think.
> > >
> >
> > Users may even use an API to synchronize the slots rather than the
> > slotsync worker. In that case, synchronization won't start immediately
> > after a server restart.
> >
>
> But I think that after a restart, in most cases the slot will be
> created fresh, as we persist the slot for the first time only when a
> sync is successful. Now, when the standby has not flushed the WAL
> corresponding to remote_lsn (SS_SKIP_WAL_NOT_FLUSHED), the slotsync
> can be skipped even for persisted slots, but that should be rare, and
> we anyway won't be able to persist the skip reason in the other cases
> as the slot itself won't be persisted by that time. So, I feel keeping
> slot_sync_skip_reason in memory is sufficient.
>
Okay, makes sense.
A few comments on 001:
1)
+ slots, but may (if leftover from a promotedstandby) contain a timestamp.
promotedstandby --> promoted standby
2)
+ s.slotsync_skip_count,
+ s.last_slotsync_skip_at,
Shall we rename last_slotsync_skip_at to slotsync_last_skip_at? That
way, all slotsync-related stats columns will have the same prefix.
3)
+#include "utils/injection_point.h"
+ INJECTION_POINT("slot-sync-skip", NULL);
I think we can move both to patch 002, as these are needed for the test alone.
4)
+
+ /*
+ * If found_consistent_snapshot is not NULL, a true value means
+ * the slot synchronization was successful, while a false value
+ * means it was skipped (see
+ * update_and_persist_local_synced_slot()). If
+ * found_consistent_snapshot is NULL, no such check exists, so the
+ * stats can be updated directly.
+ */
+ if (!found_consistent_snapshot || *found_consistent_snapshot)
+ update_slotsync_skip_stats(SS_SKIP_NONE);
I see that when 'found_consistent_snapshot' is true, we update the
stats here, but when it is false, we update the stats in the caller.
Also, for the 'remote_slot_precedes' case, we update the stats to
SS_SKIP_REMOTE_BEHIND here itself. I think for
'SS_SKIP_NO_CONSISTENT_SNAPSHOT' as well, we should update the stats
here instead of in the caller.
We can do this:

update_local_synced_slot()
{
    skip_reason = SS_SKIP_NONE;

    if (remote slot is behind)
        skip_reason = SS_SKIP_REMOTE_BEHIND;

    if (found_consistent_snapshot && !*found_consistent_snapshot)
        skip_reason = SS_SKIP_NO_CONSISTENT_SNAPSHOT;

    /* Later in this function, when syncing is done: */
    update_slotsync_skip_stats(skip_reason);
}
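To make the suggested single-exit structure concrete, here is a minimal,
compile-checkable C sketch. The SS_SKIP_* names and
update_slotsync_skip_stats() follow the patch under discussion; the enum
values, the stats stub, and sync_one_slot() are stand-ins for
illustration only, not the actual PostgreSQL code.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the patch's skip-reason enum (names from the patch). */
typedef enum SlotSyncSkipReason
{
	SS_SKIP_NONE,
	SS_SKIP_REMOTE_BEHIND,
	SS_SKIP_NO_CONSISTENT_SNAPSHOT,
	SS_SKIP_WAL_NOT_FLUSHED
} SlotSyncSkipReason;

/* Stub: the real function updates the slot's statistics counters. */
static SlotSyncSkipReason last_reported = SS_SKIP_NONE;

static void
update_slotsync_skip_stats(SlotSyncSkipReason reason)
{
	last_reported = reason;
}

/*
 * Sketch of the suggested flow: compute skip_reason up front, do the
 * sync work, then report the reason exactly once at the end, instead
 * of updating the stats from both this function and its caller.
 */
static SlotSyncSkipReason
sync_one_slot(bool remote_slot_precedes, bool *found_consistent_snapshot)
{
	SlotSyncSkipReason skip_reason = SS_SKIP_NONE;

	if (remote_slot_precedes)
		skip_reason = SS_SKIP_REMOTE_BEHIND;

	if (found_consistent_snapshot && !*found_consistent_snapshot)
		skip_reason = SS_SKIP_NO_CONSISTENT_SNAPSHOT;

	/* ... the actual slot update work would happen here ... */

	update_slotsync_skip_stats(skip_reason);
	return skip_reason;
}
```

With this shape, every exit path reports a reason (possibly
SS_SKIP_NONE), so no caller needs to know which skip cases were already
handled inside the function.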
5)
+ if (synced)
+ {
+ ReplicationSlotAcquire(NameStr(slot->data.name), true, false);
+
+ if (slot->data.invalidated == RS_INVAL_NONE)
+ update_slotsync_skip_stats(SS_SKIP_WAL_NOT_FLUSHED);
+
+ ReplicationSlotRelease();
+ }
Shall we check 'slot->data.invalidated' along with the 'synced'
condition? That way, there is no need to acquire or release the slot if
it is invalidated. We can fetch 'invalidated' under the same SpinLock
itself.
6)
+ SS_SKIP_WAL_NOT_FLUSHED, /* Standby did not flush the wal coresponding
+ * to confirmed flush on remote slot */
on --> of
coresponding --> corresponding
thanks
Shveta