On Tue, 30 Jul 2019 21:30:45 +0900 (Tokyo Standard Time)
Kyotaro Horiguchi <horikyota....@gmail.com> wrote:

> Thanks for reviewing!
> 
> At Thu, 27 Jun 2019 16:22:56 +0200, Jehan-Guillaume de Rorthais
> <j...@dalibo.com> wrote in <20190627162256.4f4872b8@firost>
> > Hi all,
> > 
> > Being interested by this feature, I did a patch review.
> > 
> > This features adds the GUC "max_slot_wal_keep_size". This is the maximum
> > amount of WAL that can be kept in "pg_wal" by active slots.
> > 
> > If the amount of WAL exceeds this limit, the slot is deactivated and
> > its status (new field in pg_replication_slot) is set as "lost".  
> 
> This patch doesn't deactivate walsender. A walsender stops by
> itself when it finds that it cannot continue ongoing replication.

Sure, sorry for the confusion, I realize my sentence is ambiguous. Thanks for
the clarification.

[...]

> > In "pg_replication_slots" view, the new "wal_status" field is misleading.
> > Consider this sentence and the related behavior from documentation
> > (catalogs.sgml):
> > 
> >   <literal>keeping</literal> means that some of them are to be removed by
> > the next checkpoint.
> > 
> > "keeping" appears when the current checkpoint will delete some WAL further
> > than "current_lsn - max_slot_wal_keep_size", but still required by at least
> > one slot. As some WAL required by some slots will be deleted quite soon,
> > probably before anyone can react, "keeping" status is misleading here. We
> > are already in the red zone.  
> 
> It may be "losing", which would be less misleading.

Indeed, "losing" is a better match for this state.

However, what's the point of this state from the admin's point of view? In many
situations, the admin will have no time to react immediately and fix whatever
could help.

How useful is this specific state?

> > I would expect this "wal_status" to be:
> > 
> > - streaming: slot lag between 0 and "max_wal_size"
> > - keeping: slot lag between "max_wal_size" and "max_slot_wal_keep_size". the
> >   slot actually protect some WALs from being deleted
> > - lost: slot lag superior of max_slot_wal_keep_size. The slot couldn't
> > protect some WAL from deletion  
> 
> I agree that comparing to max_wal_size is meaningful. The revised
> version behaves as that.

The v16-0006 patch doesn't apply anymore because of commit 709d003fbd. Here is
the fix:

  --- a/src/backend/access/transam/xlogreader.c
  +++ b/src/backend/access/transam/xlogreader.c
  @@ -304,7 +304,7 @@
  -       XLByteToSeg(targetPagePtr, targetSegNo, state->wal_segment_size);
  +       XLByteToSeg(targetPagePtr, targetSegNo, state->segcxt.ws_segsize);

I suppose you might have more refactoring to do in light of Alvaro's
review.

I confirm the new patch behaves correctly in my tests with regard to the
"wal_status" field.

> > Documentation follows with:
> > 
> >   The last two states are seen only when max_slot_wal_keep_size is
> >   non-negative
> > 
> > This is true with the current behavior. However, if "keeping" is set as
> > soon as the slot lag is greater than "max_wal_size", this status could be
> > useful even with "max_slot_wal_keep_size = -1". As soon as a slot is
> > stacking WALs that should have been removed by previous checkpoint, it
> > "keeps" them.
> 
> I revised the documentation that way. Both
> view-pg-replication-slots.html and
> runtime-config-replication.html are reworded.

+      <entry>Availability of WAL records claimed by this
+      slot. <literal>streaming</literal>, <literal>keeping</literal>,

Slots are keeping WALs, not WAL records. Shouldn't it be "Availability of WAL
files claimed by this slot"?

+      <literal>streaming</literal> means that the claimed records are
+      available within max_wal_size. <literal>keeping</literal> means

I wonder if "streaming" is the appropriate name here. The required WALs might
be available for streaming while the slot is not active, thus not "streaming".
What about merging it with the "active" field, in the same fashion as
pg_stat_activity.state? We would have an enum "pg_replication_slots.state" with
the following states:

* inactive: non active slot
* active: activated, required WAL within max_wal_size
* keeping: activated, max_wal_size is exceeded but the required WAL is still
  protected by replication slots or wal_keep_segments.
* lost: some WAL are definitely lost

Thoughts?

[...]
> > * "remain" should be NULL if "max_slot_wal_keep_size=-1 or if the slot isn't
> >   active  
> 
> The revised version shows the following statuses.
> 
>    streaming / NULL             max_slot_wal_keep_size is -1
>    unknown   / NULL             mswks >= 0 and restart_lsn is invalid
>    <status>  / <bytes>          otherwise

Works for me.

> > * the "lost" status should be a definitive status
> > * it seems related, but maybe the "wal_status" should be set as "lost"
> >   only when the slot has been deactivate ?  
> 
> Agreed. While replication is active, if required segments seems
> to be lost once, delayed walreceiver ack can advance restart_lsn
> to "safe" zone later. So, in the revised version, if the segment
> for restart_lsn has been removed, GetLsnAvailablity() returns
> "losing" if walsender is active and "lost" if not.

ok.

> > * logs should warn about a failing slot as soon as it is effectively
> >   deactivated, not before.  
> 
> Agreed. Slots on which walsender is running are excluded from the
> output of ReplicationSlotsEnumerateBehnds. As a result, the "some
> replication slots lost.." message is emitted after the related
> walsender stops.

Once a slot has lost WALs and has been deactivated, the following message
appears during every checkpoint:

  WARNING:  some replication slots have lost required WAL segments
  DETAIL:  Slot slot_limit_st lost 177 segment(s)

I wonder if it is useful to keep showing these messages for slots that were
already dead before this checkpoint.

Regards,
