Thanks a lot, Kevin!!

Yes. I intend to track full table scans first, to make sure that only small
tables, or tables with very few pages, are (as you said) being scanned in
full.
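
As a first pass, I am thinking of something along these lines to surface
sequential scans on larger tables (a rough sketch only; it joins
pg_stat_user_tables to pg_class for relpages, and the 100-page cutoff is an
arbitrary threshold I picked for illustration):

    SELECT s.relname, s.seq_scan, s.seq_tup_read, c.relpages
      FROM pg_stat_user_tables s
      JOIN pg_class c ON c.oid = s.relid
     WHERE c.relpages > 100     -- arbitrary size cutoff, for illustration only
     ORDER BY s.seq_scan DESC   -- relpages is an estimate maintained by VACUUM/ANALYZE
     LIMIT 20;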

I have yet to identify the slow-running queries. I will surely follow up
here with them once I do.
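
In the meantime, I am planning to have them logged via
log_min_duration_statement in postgresql.conf (the 500 ms threshold below is
just an example value, not a recommendation):

    log_min_duration_statement = 500   # log any statement running longer than 500 ms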

Thanks
VB



On Tue, Sep 27, 2011 at 8:02 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:

> Venkat Balaji <venkat.bal...@verse.in> wrote:
>
> > I am preparing a plan to track the tables undergoing Full Table
> > Scans for most number of times.
> >
> > If i track seq_scan from the pg_stat_user_tables, will that help
> > (considering the latest analyzed ones) ?
>
> Well, yeah; but be careful not to assume that a sequential scan is
> always a bad thing.  Here's our top ten tables for sequential scans
> in a database which is performing quite well:
>
> cc=> select seq_scan, n_live_tup, relname
> cc->   from pg_stat_user_tables
> cc->   order by seq_scan desc
> cc->   limit 10;
>  seq_scan | n_live_tup |      relname
> ----------+------------+--------------------
>  81264339 |         20 | MaintCode
>  16840299 |          3 | DbTranImageStatus
>  14905181 |         18 | ControlFeature
>  11908114 |         10 | AgingBoundary
>   8789288 |         22 | CtofcTypeCode
>   7786110 |          6 | PrefCounty
>   6303959 |          9 | ProtOrderHistEvent
>   5835430 |          1 | ControlRecord
>   5466806 |          1 | ControlAccounting
>   5202028 |         12 | ProtEventOrderType
> (10 rows)
>
> You'll notice that they are all very small tables.  In all cases the
> entire heap fits in one page, so any form of indexed scan would at
> least double the number of pages visited, and slow things down.
>
> If you have queries which are not performing to expectations, your
> best bet might be to pick one of them and post it here, following
> the advice on this page:
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> -Kevin
>