On 09/28/2011 12:26 AM, Venkat Balaji wrote:
Thanks a lot Kevin !!
Yes. I intended to track full table scans first, to ensure that only
small tables, or tables with very few pages, are (as you said) being
scanned in full.
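As a rough starting point (just a sketch built on pg_stat_user_tables;
no table names assumed), something like this should show which tables
are being seq-scanned and how large they are:

  -- Tables ordered by sequential scan count, with on-disk size, so
  -- small lookup tables can be told apart from big tables being
  -- scanned in full.
  SELECT relname,
         seq_scan,                                -- sequential scans started
         seq_tup_read,                            -- live rows read by those scans
         pg_size_pretty(pg_relation_size(relid)) AS table_size
  FROM   pg_stat_user_tables
  ORDER  BY seq_scan DESC
  LIMIT  20;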
For some queries, a full table scan of a big table can also be the best
plan. If the query needs to touch all the data in a table - for
example, for an aggregate - then it will often complete fastest, and
with less disk I/O, by using a sequential scan.
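For example (hypothetical table and column names), an aggregate over a
whole table is typically planned as a seqscan even when indexes exist:

  -- Summing a column over an entire table: EXPLAIN will usually show
  -- a Seq Scan node here, because every row has to be read anyway and
  -- a sequential read is the cheapest way to do that.
  EXPLAIN SELECT sum(amount) FROM orders;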
I guess what you'd really want to know about is queries that do
seqscans but match only a relatively small fraction of the tuples
scanned, i.e. low-selectivity seqscans. I'm not sure whether it's
possible to gather that data with PostgreSQL's current level of stats
detail.
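The closest proxy I can think of from the existing counters (again just
a sketch, and it only shows read volume per scan, not how many rows
each query actually matched) is the average number of tuples read per
seqscan compared with the table's live row count:

  -- Average rows read per sequential scan vs. live rows in the table.
  -- Flags tables whose seqscans read most of the table each time, but
  -- says nothing about the selectivity of the queries themselves.
  SELECT relname,
         seq_scan,
         seq_tup_read / seq_scan AS avg_tup_per_scan,
         n_live_tup
  FROM   pg_stat_user_tables
  WHERE  seq_scan > 0
  ORDER  BY avg_tup_per_scan DESC;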
--
Craig Ringer