Yes. I am looking for justified full table scans.

If bigger tables are being scanned, I would like to know the percentage of
rows scanned versus the percentage of rows returned as output.

If a query needs, say, 80% of a table's rows as output, then a full table
scan is generally the better plan.

I believe this is possible in Postgres. I think we can get this using
pg_stat_user_tables, pg_statio_user_tables and pg_stats.
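
Something along these lines might work as a rough first cut (untested; it
only compares the average tuples read per sequential scan against the
table's current live row count, so it approximates "rows scanned vs. table
size" rather than rows actually returned per query):

  -- Average tuples read per sequential scan, as a percentage of live rows.
  -- High percentages on large tables suggest the seq scans may be justified;
  -- low percentages point at low-selectivity scans worth investigating.
  SELECT relname,
         seq_scan,
         n_live_tup,
         round(100.0 * (seq_tup_read::numeric / seq_scan)
               / NULLIF(n_live_tup, 0), 2) AS pct_rows_per_seq_scan
  FROM   pg_stat_user_tables
  WHERE  seq_scan > 0
  ORDER  BY n_live_tup DESC;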

I will post the calculation once I get it.

Thanks
VB

On Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <ring...@ringerc.id.au> wrote:

> On 09/28/2011 12:26 AM, Venkat Balaji wrote:
>
>> Thanks a lot Kevin !!
>>
>> Yes. I intended to track full table scans first to ensure that only
>> small tables, or tables with very few pages, are (as you said) getting
>> fully scanned.
>>
>
> It can also be best to do a full table scan of a big table for some
> queries. If the query needs to touch all the data in a table - for example,
> for an aggregate - then the query will often complete fastest and with less
> disk use by using a sequential scan.
>
> I guess what you'd really want to know is to find out about queries that do
> seqscans to match relatively small fractions of the total tuples scanned, ie
> low-selectivity seqscans. I'm not sure whether or not it's possible to
> gather this data with PostgreSQL's current level of stats detail.
>
> --
> Craig Ringer
>
