Russell Keane <russell.ke...@inps.co.uk> wrote:
 
> "log_min_duration_statement";"1ms"
 
> "shared_buffers";"32MB"
> "work_mem";"1MB"
 
Those are pretty low values even for a 4GB machine.  I suggest the
following changes and additions, based on the fact that you seem to
have the active portion of the database fully cached.
 
shared_buffers = '160MB'
work_mem = '8MB'
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.03
effective_cache_size = '2GB'
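As a sketch (on PostgreSQL 9.4 or later), these could also be applied
with ALTER SYSTEM instead of hand-editing postgresql.conf; note that
shared_buffers still requires a restart, while the others take effect
on reload:

```sql
-- shared_buffers needs a server restart to take effect.
ALTER SYSTEM SET shared_buffers = '160MB';
-- The remaining settings take effect on reload.
ALTER SYSTEM SET work_mem = '8MB';
ALTER SYSTEM SET seq_page_cost = 0.1;
ALTER SYSTEM SET random_page_cost = 0.1;
ALTER SYSTEM SET cpu_tuple_cost = 0.03;
ALTER SYSTEM SET effective_cache_size = '2GB';
SELECT pg_reload_conf();
```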
 
> Explain analyse with both indexes present but without the limit
> (uses the correct index):
 
> "Total runtime: 0.092 ms"
 
Part of the problem is that the planner thinks it will find a matching
row fairly quickly, and that having found one via the index it chose,
that row will be the *right* row.  The real problem is that there are
no matching rows, so it has to scan the entire index.  More
fine-grained statistics *might* help.  If other techniques don't
help, you can rewrite the query slightly to create an optimization
fence, but that should be a last resort.  I agree with Robert that if
you have a lot of queries that select on "incoming" and/or
"inactive", a conditional index (with a WHERE clause in its
definition) is likely to be very helpful.
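For illustration only -- the table and column details here are
assumptions, not taken from your schema -- a conditional index and
the optimization-fence rewrite might look something like:

```sql
-- Sketch: "messages" and its columns are hypothetical names.
-- A partial index covering only the rows these queries look for:
CREATE INDEX messages_incoming_inactive_idx
    ON messages (received_at)
    WHERE incoming AND NOT inactive;

-- Last-resort optimization fence: OFFSET 0 inside the subquery
-- keeps the planner from flattening it and choosing the other index.
SELECT *
  FROM (SELECT *
          FROM messages
         WHERE incoming AND NOT inactive
        OFFSET 0) AS sub
 LIMIT 1;
```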
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance