On Sun, 20 Nov 2022 at 13:45, Michail Nikolaev
<michail.nikol...@gmail.com> wrote:

> If such approach looks committable - I could do more careful
> performance testing to find the best value for
> WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS.

Nice patch.

We seem to have replaced one magic constant with another, so I'm not
sure this really counts as autotuning, but I like it much better than
what we had before (i.e. better than my previous patch).

A few thoughts:

1. I was surprised that you removed the limits on size and kept only
the wasted-work limit. If there is no read traffic, we will hardly
ever compress, which means the removal of xids at commit will get
slower over time.  I would prefer that we force compression on a
regular basis, such as every time we process an XLOG_RUNNING_XACTS
message (every 15s), as well as when we hit certain size limits.

2. If there is lots of read traffic but no changes flowing, it would
also make sense to force compression when the startup process goes
idle rather than wait for the work to be wasted first.

Quick patch attached to add those two compression events as well.

That should favour the smaller wasted work limits.
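
To make the shape of that concrete, here is a rough C sketch of what I
mean by compression events. The enum, counter and threshold value below
are invented for this mail (they are not taken from the attached patch);
the point is simply that forced events bypass the wasted-work test,
while the normal path keeps the heuristic.

/*
 * Illustrative sketch only: the enum, counter and threshold value are
 * invented for this mail, not taken from the attached patch.
 */
#define WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS  8192    /* value needs testing */

typedef enum
{
    KAX_COMPRESS_LAZY,              /* compress only if enough work was wasted */
    KAX_COMPRESS_RUNNING_XACTS,     /* forced: XLOG_RUNNING_XACTS applied (~15s) */
    KAX_COMPRESS_STARTUP_IDLE       /* forced: startup process has gone idle */
} KAXCompressReason;

static long wasted_snapshot_work = 0;   /* entries skipped while scanning */

static void
KnownAssignedXidsCompress(KAXCompressReason reason)
{
    /* Only the lazy path is gated on the wasted-work heuristic. */
    if (reason == KAX_COMPRESS_LAZY &&
        wasted_snapshot_work < WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS)
        return;

    /* ... compact the KnownAssignedXids array here ... */

    wasted_snapshot_work = 0;
}

The forced call sites would then be the XLOG_RUNNING_XACTS redo path
and wherever the startup process decides it has gone idle.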

-- 
Simon Riggs                http://www.EnterpriseDB.com/

Attachment: events_that_force_compression.v1.patch
Description: Binary data
