On Tue, Dec 13, 2022 at 12:29 AM John Naylor
<john.nay...@enterprisedb.com> wrote:
> If the number of unfrozen heap pages is the thing we care about, perhaps 
> that, and not the total size of the table, should be the parameter that 
> drives freezing strategy?

That's not the only thing we care about, though. And to the extent we
care about it, we mostly care about the consequences of either
freezing or not freezing eagerly. Concentration of unfrozen pages in
one particular table is a lot more of a concern than the same number
of heap pages spread out across multiple tables. Those tables can
each be vacuumed independently, each has its own relfrozenxid that
can be advanced independently, and their pages are very likely to be
frozen as part of a VACUUM that needed to happen anyway.

Pages become frozen pages because VACUUM freezes those pages. Same
with all-visible pages, which could in principle have been made
all-frozen instead, had VACUUM opted to do it that way back when it
processed the page. So VACUUM is not a passive, neutral observer here.
What happens over time and across multiple VACUUM operations is very
relevant. VACUUM needs to pick up where it left off last time, at
least with larger tables, where the time between VACUUMs is naturally
very high, and where each individual VACUUM has to process a huge
number of individual pages. It's not really practical to take a "wait
and see" approach with big tables.

At the very least, a given VACUUM operation has to choose its freezing
strategy based on how it expects the table will look when it's done
vacuuming the table, and how that will impact the next VACUUM against
the same table. Without that, vacuuming an append-only table falls
into a pattern of setting pages all-visible in one VACUUM, only to
freeze those same pages in the very next VACUUM because too many
unfrozen pages have accumulated by then. That makes little sense;
we're far better off freezing the pages at the earliest opportunity
instead.
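To make the lazy-vs-eager pattern concrete, here's a toy model of an append-only table. This is purely an illustrative sketch, not PostgreSQL's actual VACUUM code: the page states, the `vacuum` function, and the "page updates" counter are all simplifications I'm assuming for the example.

```python
# Toy model: each page is "dirty" (newly filled), "all-visible", or
# "all-frozen". Lazy VACUUM marks dirty pages all-visible and only
# freezes them on a later pass; eager VACUUM freezes them immediately.

def vacuum(pages, eager):
    """Process pages in place; return number of page updates performed."""
    updates = 0
    for i, state in enumerate(pages):
        if state == "dirty":
            pages[i] = "all-frozen" if eager else "all-visible"
            updates += 1
        elif state == "all-visible" and not eager:
            # A later VACUUM must revisit the page just to freeze it,
            # dirtying the same page a second time.
            pages[i] = "all-frozen"
            updates += 1
    return updates

# Append-only workload: each cycle appends 4 fresh pages, then VACUUM runs.
for eager in (False, True):
    pages, total = [], 0
    for _ in range(3):
        pages += ["dirty"] * 4
        total += vacuum(pages, eager)
    print(f"eager={eager}: {total} page updates for {len(pages)} pages")
```

In this toy model the lazy strategy touches many pages twice (once to set all-visible, once more to freeze), while the eager strategy does everything in a single pass per page, which is the "do everything at the same time" point made below about the visibility map WAL record.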

We're going to have to write a WAL record for the visibility map
anyway, so doing everything at the same time has a lot to recommend
it. Even if it turns out to be quite wrong, we may still come out
ahead in terms of absolute volume of WAL written, and especially in
terms of performance stability. To a limited extent we need to reason
about what will happen in the near future. But we also need to reason
about which kinds of mispredictions we cannot afford to make, and
which kinds are okay. Some mistakes hurt a lot more than others.

-- 
Peter Geoghegan

