I've got about 44GB of data in a few hundred production databases. I'm running 
PG 8.1.4, and upgrading today (even to the latest 8.1.x release) is not an 
option. I know, I know. I wish it were, and it's slated here for Q2, but I 
cannot even apply maintenance patches without a full testing cycle. 

My auto-vac parameters are: 
autovacuum = on                         # enable autovacuum subprocess? 
autovacuum_naptime = 3                  # time between autovacuum runs, in secs 
autovacuum_vacuum_threshold = 400       # min # of tuple updates before vacuum 
autovacuum_analyze_threshold = 200      # min # of tuple updates before analyze 
autovacuum_vacuum_scale_factor = 0.2    # fraction of rel size before vacuum 
autovacuum_analyze_scale_factor = 0.1   # fraction of rel size before analyze 
#autovacuum_vacuum_cost_delay = -1      # default vacuum cost delay for autovac 
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for autovac 
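
(For clarity: the commented-out -1 values just mean autovacuum inherits the 
regular vacuum_cost_delay / vacuum_cost_limit settings, which are at their 
defaults here, so autovacuum is not throttled. If throttling turns out to be 
part of the answer, I assume it would look something like this, though these 
numbers are untested guesses on my part:) 

```
# hypothetical throttling values, NOT what is running today:
autovacuum_vacuum_cost_delay = 10     # sleep 10 ms each time the cost limit is hit
autovacuum_vacuum_cost_limit = 200    # accumulated I/O cost before sleeping
```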

and auto-vacuum is running. 

My problem is that every Saturday at midnight I have to start a 
"vacuumdb -f -z -a", or my pg_clog directory never clears out. 
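
(For context on why I suspect this is wraparound-related: my understanding is 
that pg_clog segments can only be truncated up to the oldest frozen-XID 
horizon across all databases, which I've been eyeballing with a query along 
these lines:) 

```sql
-- per-database freeze horizon; pg_clog can only be truncated
-- back to the oldest datfrozenxid across all databases
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;
```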

The manual vacuum takes quite some time and impacts weekend customers. 
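
(The weekend job is nothing fancy, just a cron entry roughly like this:) 

```
# Saturday 00:00: full vacuum + analyze of every database
0 0 * * 6  vacuumdb --all --full --analyze --quiet
```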

So, my questions are: 

a) Is the manual vacuum needed for performance reasons, or is auto-vac 
sufficient? 
b) How do my settings look? 
c) Is there a way to get the clogs cleared via autovac? Would a full vac of 
just template1/template0 (if that last is even possible) do it? 
