We've had our PostgreSQL 8.1.4 installation configured to autovacuum
since January, but I suspect it might not be doing anything. Can I
determine what it is actually doing from the log files? Is there a
summary of which "when to log" settings in postgresql.conf I should set
to get at least table-level messages about its vacuum/no-vacuum
decisions? The only message I see now is this very terse line, which at
least shows that autovacuum runs:
LOG: autovacuum: processing database "dc_prod"
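For what it's worth, I've been guessing at logging settings like the
ones below. I don't actually know which debug level (if any) surfaces
per-table autovacuum decisions on 8.1, so treat these as assumptions,
not a recipe:

    # logging experiments (8.1 parameter names)
    redirect_stderr = on          # write the server log to files
    log_min_messages = debug2     # hoping per-table decisions show
                                  # up at debug2 or debug3
    log_line_prefix = '%t '       # timestamp lines to correlate runs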
I suspect there's a problem because the production database appears to
carry about 78% overhead, whereas I would expect 10-15% based on what
I've read. That bloat is bad news for some Seq Scan operations on large
tables (the root problem I'm starting to tackle). Notes:
[+] Last week I restored a production backup into my
    development sandbox with "psql -f", then ran
    "vacuumdb -a -z" on it. Afterwards, comparing
    "select pg_database_size('dc_prod')" in pgAdmin3,
    production is 5.9GB but development is only 3.3GB,
    so production is about 78% larger.
[+] The worst table shows about 2.7x overhead, comparing
    relpages/reltuples from pg_class between the two
    databases (a sketch of the query follows this list).
Here are the relevant postgresql.conf settings in production. I can't
vouch for their suitability, but I believe they should let autovacuum
reclaim dead space for reuse.
#stats_start_collector = on
#stats_block_level = off
stats_row_level = on
#stats_reset_on_server_start = off
autovacuum = on
autovacuum_naptime = 360
autovacuum_vacuum_threshold = 1000
autovacuum_analyze_threshold = 500
autovacuum_vacuum_scale_factor = 0.04
autovacuum_analyze_scale_factor = 0.02
autovacuum_vacuum_cost_delay = 10
autovacuum_vacuum_cost_limit = -1
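If I'm reading the docs right, those numbers mean a table is vacuumed
once its dead (updated + deleted) rows exceed
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor *
reltuples; for a 1,000,000-row table that works out to
1000 + 0.04 * 1,000,000 = 41,000 dead rows. A quick per-table sanity
check I put together from the settings above (my own sketch, not
anything official):

    -- churn each table needs before autovacuum should fire
    SELECT relname,
           reltuples,
           1000 + 0.04 * reltuples AS vacuum_trigger_rows,
           500  + 0.02 * reltuples AS analyze_trigger_rows
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY reltuples DESC;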
I was suspicious that stats_row_level might not take effect because
stats_block_level is off. But pg_stat_user_tables.n_tup_ins,
pg_stat_user_tables.n_tup_upd and pg_stat_user_tables.n_tup_del are all
increasing (slowly but surely), so row counting does appear to work.
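For the record, the check I'm running is nothing fancier than:

    -- confirm row-level stats are accumulating
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd + n_tup_del DESC
    LIMIT 10;

So the collector side seems fine; I just can't tell whether autovacuum
is acting on those counters.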
Thanks,
David Crane
http://www.donorschoose.org
Teachers Ask. You Choose. Students Learn.