On 21 March 2015 at 23:34, Roland Dunn <roland.d...@gmail.com> wrote:

>
> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter? Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?
>

Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?

Pay special attention to "Buffers: shared read=NNN" and "Buffers: shared
hit=NNN". If you're not reading any buffers between runs, then the pages are
in PostgreSQL's shared buffers; by the looks of your config you have 10GB
of those. On the other hand, if you are getting buffer reads, then they're
coming either from disk or from the OS cache, and PostgreSQL won't really
know the difference.
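
For example, something like this (the table, column, and numbers here are
placeholders, not from your setup):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM some_table
    WHERE id = '00000000-0000-0000-0000-000000000001';

    -- in the output, look for a line like:
    --   Buffers: shared hit=1210 read=340

Run it twice in the same session; if the second run shows no "read" figure
(everything is "hit"), the pages were served from shared buffers.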

If you're not getting any buffer reads and it's still slow, then the
problem is not I/O.

Just for fun... what happens if you stick the 50 UUIDs in some table,
analyze it, then perform a join between the two tables using IN() or
EXISTS() (see the sketch below)? Is that any faster?
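
A sketch of what I mean (assuming your table is called big_table with a
uuid column named id; adjust to your actual schema):

    -- load the 50 UUIDs into a temp table and give the planner stats
    CREATE TEMP TABLE wanted (id uuid PRIMARY KEY);
    INSERT INTO wanted VALUES
      ('00000000-0000-0000-0000-000000000001'),
      ('00000000-0000-0000-0000-000000000002');  -- ...and the other 48
    ANALYZE wanted;

    -- join form
    SELECT b.*
    FROM big_table b
    JOIN wanted w ON w.id = b.id;

    -- EXISTS() form
    SELECT b.*
    FROM big_table b
    WHERE EXISTS (SELECT 1 FROM wanted w WHERE w.id = b.id);

Once the planner can see statistics on those 50 values, it may choose a
different plan than it does for one long IN() list of literals.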

Also, how well does it perform with set enable_bitmapscan = off;?
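
That setting only affects your current session, so it's safe to experiment:

    SET enable_bitmapscan = off;
    -- rerun the query here, ideally with EXPLAIN (ANALYZE, BUFFERS)
    RESET enable_bitmapscan;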

Regards

David Rowley
