Hi folks,

I have been wondering about integrating pg_lesslog into our architecture, which is based on WAL
shipping with pg_standby (thanks Simon) as the recovery tool. The main goal
is to reduce the accumulation of WAL files on the "master" host (and thus disk
space consumption) when the slave is unavailable.
From reading the pg_standby source, the WAL file size is checked in the
CustomizableNextWALFileReady function, which allows the file size check to be
customized. The current check is essentially "if the file size equals the normal
WAL segment size, then we assume we have a good WAL file". As a result, we cannot
feed compressed files produced by pg_lesslog directly to pg_standby; they have to
be decompressed first, which implies a "pre-processing" stage and/or command.
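To illustrate the problem, here is a rough Python paraphrase of the size test as I
understand it (the real check in pg_standby is C code, and 16 MB is just the default
segment size); a segment shrunk by pg_compresslog will never pass it:

    import os

    # Default WAL segment size; 16 MB unless PostgreSQL was built with a
    # different --with-wal-segsize (assumption: default build).
    XLOG_SEG_SIZE = 16 * 1024 * 1024

    def next_wal_file_ready(path):
        """Rough equivalent of pg_standby's size test: the file is only
        considered a complete WAL segment when it has the expected size."""
        try:
            return os.stat(path).st_size == XLOG_SEG_SIZE
        except FileNotFoundError:
            return False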

I can see three ways to do this:
  - first, we could write a script on the master that compresses the file,
rsyncs it, and then decompresses it on the slave, but triggering remote
processing on the slave from the master seems rather hazardous to me (it may
lead to strange situations)
  - second, we could rsync the compressed logs to a staging directory and run a
local "daemon" on the slave that watches that directory and decompresses
incoming files into their final place (the pg_standby WAL source directory),
along the lines of the sketch after this list
  - third, we could add some functionality to pg_standby allowing it to
pre-process the WAL file (by customizing the CustomizableNextWALFileReady
function?); this might be useful for issues or use cases other than
pg_lesslog, but it fits this one quite well :)
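For the second option, a minimal sketch of such a watcher daemon could look like the
following (the directory names are just placeholders, and I am assuming pg_lesslog's
pg_decompresslog takes the compressed segment and the output file as its two
arguments); rsync would need to write into the staging directory under temporary
names so the daemon never picks up a partial transfer:

    #!/usr/bin/env python3
    """Sketch of the "watcher" daemon for the second option: poll a staging
    directory, decompress each new segment, and hand it to pg_standby's
    source directory. Paths and invocation details are assumptions."""

    import os
    import subprocess
    import time

    STAGING_DIR = "/var/lib/pgsql/wal_staging"  # where rsync drops compressed segments
    RESTORE_DIR = "/var/lib/pgsql/wal_archive"  # pg_standby's WAL source directory

    def decompress_pending():
        for name in sorted(os.listdir(STAGING_DIR)):
            src = os.path.join(STAGING_DIR, name)
            dst = os.path.join(RESTORE_DIR, name)
            tmp = dst + ".tmp"
            # Restore the full-size segment from the compressed one.
            subprocess.run(["pg_decompresslog", src, tmp], check=True)
            # Rename atomically so pg_standby never sees a partial segment.
            os.rename(tmp, dst)
            os.unlink(src)

    if __name__ == "__main__":
        while True:
            decompress_pending()
            time.sleep(5)

The decompress-to-a-temporary-name-then-rename step matters here, since pg_standby
would otherwise apply its size check to a half-written file.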

What are your thoughts on each of these points?

Thanks,

-- 
Jean-Christophe Arnu
