On 5/15/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:

On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:
> "Y Sidhu" <[EMAIL PROTECTED]> writes:
> > it may be table fragmentation. What kind of tables? We have 2 of them
> > which experience lots of adds and deletes only. No updates. So a typical
> > day experiences record adds a few dozen times on the order of 2.5
> > million. And deletes once daily. Each of these tables has about 3 btree
> > indexes.
>
> With an arrangement like that you should vacuum once daily, shortly
> after the deletes --- there's really no point in doing it on any other
> schedule.  Note "shortly" not "immediately" --- you want to be sure that
> any transactions old enough to see the deleted rows have ended.
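A minimal sketch of that schedule ("my_table" is a stand-in for the actual
table name), run from cron or similar some time after the daily deletes
have committed:

-- reclaim the space left by the daily deletes and refresh statistics:
VACUUM ANALYZE my_table;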

Also, think about ways you might avoid the deletes altogether. Could you
do a TRUNCATE instead? Could you use partitioning? If you must use
deletes, then look at CLUSTERing the table some time after the deletes run
(but be aware that prior to 8.3 CLUSTER doesn't fully obey MVCC).
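For example (a sketch; "my_table" and the index name are stand-ins):

-- TRUNCATE reclaims all the space immediately, but removes every row:
TRUNCATE TABLE my_table;

-- Or, after the daily deletes, rewrite the table in index order to
-- compact it (this takes an exclusive lock on the table):
CLUSTER my_table_pkey ON my_table;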

To answer your original question: a way to see how bloated a table is
would be to ANALYZE it, then divide reltuples by relpages from that
table's row in pg_class (that gives the actual rows per page) and compare
the result to 8K / average row size (the rows per page you'd expect if
the table were compact). The average row size for table rows would be the
sum of avg_width from pg_stats for that table, plus 24 bytes of per-row
overhead. For indexes, it would be the sum of avg_width for all fields in
the index plus some overhead (8 bytes, I think).
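A sketch of that calculation for a single table ("my_table" is a
stand-in; assumes the default 8192-byte page size):

-- actual rows per page for the table:
SELECT reltuples / relpages AS actual_rows_per_page
FROM pg_class
WHERE relname = 'my_table' AND relpages > 0;

-- expected rows per page: page size / (average row width + 24 bytes):
SELECT 8192.0 / (sum(avg_width) + 24) AS expected_rows_per_page
FROM pg_stats
WHERE tablename = 'my_table';

If actual_rows_per_page comes out much lower than
expected_rows_per_page, the table is carrying a lot of dead space.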

An even simpler alternative would be to install contrib/pgstattuple and
use the pgstattuple function, though IIRC that does read the entire
relation from disk.
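Once the contrib module is installed, that looks like ("my_table" again
being a stand-in):

SELECT * FROM pgstattuple('my_table');

The dead_tuple_percent and free_percent columns in the result give a
direct measure of bloat, at the cost of scanning the whole table.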
--
Jim Nasby                                      [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)


Here are my results:

a.  SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;

I get 66

b.  SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;

I get 1
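As written, both queries aggregate across every relation in the database
rather than a single table, and (b) also uses integer division (which
truncates the result to 1) and 8000 in place of the 8192-byte page size.
A per-table version with float division (again, "my_table" is a stand-in)
would be:

SELECT 8192.0 / (sum(avg_width) + 24) AS expected_rows_per_page
FROM pg_stats
WHERE tablename = 'my_table';

Compare that against reltuples / relpages for the same table's row in
pg_class.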


Yudhvir
