On Mon, Jan 12, 2026 at 2:54 AM Andrey Borodin <[email protected]> wrote:
>
>
>
> > On 14 Jul 2025, at 23:22, Andrey Borodin <[email protected]> wrote:
> >
> > PFA rebased version.
>
> Here's a rebased version. Also I fixed a problem of possible wrong memory 
> context used for allocating compression buffer.

Thanks for updating the patch!


With the v5 patch, I see the following compiler warning:

    xlog.c:726:1: warning: unused function 'XLogGetRecordTotalLen' [-Wunused-function]
      726 | XLogGetRecordTotalLen(XLogRecord *record)
          | ^~~~~~~~~~~~~~~~~~~~~

This seems to happen because XLogGetRecordTotalLen() is only used under
WAL_DEBUG. If that's correct, its definition should probably also be guarded
by WAL_DEBUG to avoid the warning.
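Concretely, I mean something like the following sketch (the function body stays
whatever the patch currently has; only the guard is new):

    #ifdef WAL_DEBUG
    static uint32
    XLogGetRecordTotalLen(XLogRecord *record)
    {
        ... /* existing body unchanged */
    }
    #endif   /* WAL_DEBUG */

That matches how other WAL_DEBUG-only helpers in xlog.c are handled and keeps
non-debug builds warning-free.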


cfbot reported a regression test failure with v5. Could you please
look into that?
https://cirrus-ci.com/build/5635306839343104


When I ran pg_waldump on WAL generated with wal_compression=pglz and
wal_compression_threshold=32, I got this error:

    pg_waldump: error: error in WAL record at 0/02183BE0: could not decompress record at 0/2183D10

Isn't this a bug?


+ XLogEnsureCompressionBuffer(MaxSizeOfXLogRecordBlockHeader + BLCKSZ);

XLogEnsureCompressionBuffer() is now called every time any of XLogRegisterBuffer(),
XLogRegisterBlock(), XLogRegisterData(), or XLogRegisterBufData() is invoked.
Why is that necessary? Wouldn't it be sufficient to
call XLogEnsureCompressionBuffer() once, with the total record length,
just before XLogCompressRdt(rdt)?


v5 removes the ability to compress only full-page images, which is the current
wal_compression behavior. That may be disappointing for users who rely on
the existing semantics. Would it make more sense to keep the current behavior
and add a new feature to compress entire WAL records whose size exceeds
the specified threshold?
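For example, keeping both behaviors might end up looking like this in
postgresql.conf (the threshold GUC name comes from the patch; the semantics of
combining the two settings are my assumption):

    wal_compression = lz4              # current behavior: compress full-page images
    wal_compression_threshold = 512    # new: also compress whole records above this size

Users who only want today's FPI compression could then leave the threshold
disabled and see no change in behavior.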

Regards,


-- 
Fujii Masao
