On Thu, 2006-11-02 at 16:50 -0500, Tom Lane wrote:
> I wrote:
> > * pg_clog is truncated according to the oldest pg_database.datvacuumxid.

> Shortening the freeze horizon will reduce the size
> that pg_clog occupies just *after* that happens, but we're still going
> to see pg_clog bloating up to something close to 256MB before autovacuum
> kicks in. 

Well, by default a Windows install is about 80MB, plus 7 x 16MB of WAL
gives nearly 200MB, so we're talking about doubling the basic on-disk
footprint for every user if we let that happen.
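Spelled out (taking the 7 WAL segments as the usual 2*checkpoint_segments
+ 1 with the default of 3 -- my numbers, and only rough ones):

     ~80 MB   base Windows install
    +112 MB   WAL: 7 segments x 16 MB
    --------
    ~190 MB   current on-disk footprint

    +256 MB   pg_clog ceiling before autovacuum kicks in (per above)
    --------
    ~450 MB   i.e. more than double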

> This wasn't a problem in the pre-8.2 logic because we ignored
> non-connectable databases while determining the global minimum
> datvacuumxid, but it's a real problem now.
> 
> Seems like either we go back to ignoring non-connectable databases
> (with the risks that entails), or adopt some more-aggressive policy
> for launching autovacuums on them, or give up the idea of keeping
> pg_clog small.  A more-aggressive policy seems like the best option,
> but I'm not entirely sure what it should look like.  Maybe force
> autovacuum when age(datvacuumxid) exceeds twice the freeze horizon,
> or some such?  Comments?

Given that many installations are individual PCs, or at least stand-alone
servers not in constant use, I think more aggressive vacuuming makes sense
as a way to keep clog smaller. In many situations the time lost to virus
scanners continually re-scanning the database files will exceed the cost
of a more regular autovacuum, so we shouldn't really worry about that.
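(For reference, the per-database age such a policy would test can be
inspected with something like the query below -- written here against
datfrozenxid since that's the column everyone has; substitute datvacuumxid
if that's what the final logic ends up keying off:)

    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;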

It sounds like we need a GUC though, for those who don't care about the
256MB but do care about autovacuum kicking in at bad moments.
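Something like this in postgresql.conf, say (name and number purely
illustrative, just to show the shape of the knob -- not a real setting):

    # Force an autovacuum -- including on non-connectable databases -- once
    # any database's transaction age exceeds this many XIDs; the idea being
    # roughly twice the freeze horizon, per the suggestion above.
    #autovacuum_max_xid_age = 200000000   # hypothetical, e.g. 2x a 100M horizon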

Also, that solution doesn't square with the current autovacuum defaults:
we should turn autovacuum on by default, with
autovacuum_vacuum_cost_delay = 10 (ms) by default.
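In postgresql.conf terms the proposal is simply this (a sketch of the
proposed defaults, not what ships today):

    autovacuum = on                      # proposed default
    autovacuum_vacuum_cost_delay = 10    # ms; proposed default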

-- 
  Simon Riggs             
  EnterpriseDB   http://www.enterprisedb.com


