On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:
> "Y Sidhu" <[EMAIL PROTECTED]> writes:
> > it may be table fragmentation. What kind of tables? We have 2 of them which
> > experience lots of adds and deletes only. No updates. On a typical day,
> > records are added in a few dozen batches totaling on the order of 2.5
> > million rows, and deleted once daily. Each of these tables has about 3
> > btree indexes.
> 
> With an arrangement like that you should vacuum once daily, shortly
> after the deletes --- there's really no point in doing it on any other
> schedule.  Note "shortly" not "immediately" --- you want to be sure that
> any transactions old enough to see the deleted rows have ended.
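
Agreed, and that's easy to automate. A cron entry a little while after the
nightly delete job might look like this (hypothetical schedule, table, and
database names):

  # vacuum and re-analyze half an hour after the 03:00 delete job
  30 3 * * * vacuumdb --analyze --table=mytable mydb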

Also, think about ways you might avoid the deletes altogether. Could you
do a truncate instead? Could you use partitioning? If you have to stick
with deletes, look at CLUSTERing the table some time after the deletes
(but be aware that prior to 8.3, CLUSTER doesn't fully obey MVCC).
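
For example, if each day's batch goes into its own child table, the daily
delete becomes a cheap DROP that leaves no dead rows behind. A minimal
sketch using inheritance partitioning (hypothetical names):

  -- one child table per day's batch
  CREATE TABLE mytable_20070514 () INHERITS (mytable);
  -- ... load the day's rows into mytable_20070514 ...
  -- later, instead of DELETE:
  DROP TABLE mytable_20070514;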

To answer your original question: one way to see how bloated your tables
are is to ANALYZE, then divide reltuples by relpages from pg_class (which
gives how many rows per page you actually have) and compare that to the
expected value of 8k / average row size. The average row size for table
rows would be the sum of avg_width from pg_stats for the table plus 24
bytes of per-tuple overhead. For indexes, it would be the sum of avg_width
for all fields in the index plus some overhead (8 bytes, I think).
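
A rough sketch of those queries (assuming a hypothetical table named
mytable, and the default 8192-byte page size):

  ANALYZE mytable;

  -- how many rows per page the table actually has
  SELECT reltuples / relpages AS actual_rows_per_page
    FROM pg_class
   WHERE relname = 'mytable' AND relpages > 0;

  -- how many rows per page you'd expect with no bloat
  SELECT 8192.0 / (sum(avg_width) + 24) AS expected_rows_per_page
    FROM pg_stats
   WHERE tablename = 'mytable';

If the actual number is much lower than the expected one, the table is
carrying a lot of dead space.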

An even simpler alternative would be to install contrib/pgstattuple and
use the pgstattuple function, though IIRC that does read the entire
relation from disk.
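
E.g. (again with a hypothetical table name):

  SELECT * FROM pgstattuple('mytable');

That reports dead tuple counts and free space directly, but at the cost
of scanning the whole relation.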
-- 
Jim Nasby                                      [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
