On 8/5/09 12:16 PM, "Subbiah Stalin-XCGF84" <ssubb...@motorola.com> wrote:
> We have found the problem. Apparently there was a query doing a count on
> a 45-million-row table, run prior to the episode of the slow query. That
> definitely pushed the cached data out of memory. Is there a way to assign
> a portion of memory for recycling purposes, as in Oracle, so the cached
> data doesn't get affected by queries like these?
>
> Stalin
In Postgres 8.3 and above, large sequential scans use a small ring buffer,
so they don't evict other data from shared_buffers. But they can still push
things out of the OS page cache.
>
> -----Original Message-----
> From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
> Sent: Tuesday, August 04, 2009 8:57 AM
> To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
> Subject: RE: [PERFORM] Query help
>
> "Subbiah Stalin-XCGF84" <ssubb...@motorola.com> wrote:
>
>> Server has 32G memory and it's dedicated to running PG; no other
>> application is sharing this database.
>
> It's not likely to help with this particular problem, but it's generally
> best to start from a position of letting the optimizer know what it's
> really got for resources. An effective_cache_size of somewhere around
> 30GB would probably be best here.
>
>> Given the nature of the ix_objects_type_lastmodified index, I wonder
>> if the index requires a rebuild. I tested rebuilding it in another db,
>> and it came to 2500 pages as opposed to 38640 pages.
>
> That's pretty serious bloat. Any idea how that happened? Have you had a
> long-running database transaction which might have prevented normal
> maintenance from working? If not, you may need more aggressive settings
> for autovacuum. Anyway, sure, try this with the index rebuilt. If you
> don't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old
> index. (You could then rename the new index to match the old, if
> needed.)
>
>> The puzzle being why the same query with the same filters runs faster
>> most of the time, but at times runs 5+ minutes and then switches back
>> to fast mode.
>
> It is likely either that something has pushed the relevant data out of
> cache before the slow runs, or there is blocking. How big is this
> database? Can you get a list of pg_stat_activity and pg_locks during an
> episode of slow run time?
>
>> If it had used a different execution plan than the above, how do I
>> list all execution plans used for a given SQL statement?
>
> It's unlikely that the slow runs are because of a different plan being
> chosen. I was wondering if a better plan might be available, but this
> one looks pretty good with your current indexes. I can think of an
> indexing change or two which *might* cause the optimizer to pick a
> different plan, but that is far from certain, and without knowing the
> cause of the occasional slow runs, it's hard to be sure that the new
> plan wouldn't get stalled for the same reasons.
>
> If it's possible to gather more data during an episode of a slow run,
> particularly the pg_stat_activity and pg_locks lists, run as the
> database superuser, it would help pin down the cause. A vmstat during
> such an episode, to compare to a "normal" one, might also be
> instructive.
>
> -Kevin
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>