> (1) SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;
>
> OUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):
>
> Limit  (cost=0.00..4661.02 rows=4000 width=16) (actual time=0.009..1.036 rows=4000 loops=1)
>   Buffers: shared hit=42
>   ->  Seq Scan on lookup ...
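
For anyone following along, a plan like the one quoted above is produced by running the statement under EXPLAIN (ANALYZE, BUFFERS). A minimal sketch of the invocation, reusing the quoted query:

    -- Runs the query and reports actual timing plus buffer usage.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;

In that output, "Buffers: shared hit=42" means every one of the 42 8kB pages the scan touched was already in shared_buffers, so this particular run read nothing from disk.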
Thanks for the replies. More detail and data below:
Table: "lookup"
uuid: type uuid. not null. plain storage.
datetime_stamp: type bigint. not null. plain storage.
harvest_date_stamp: type bigint. not null. plain storage.
state: type smallint. not null. plain storage.
Indexes:
"lookup_pkey" PRIM
On 21 March 2015 at 23:34, Roland Dunn wrote:
>
> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter? Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?
>
Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the queries in question?
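
On the RAM questions quoted above, two points worth illustrating: effective_cache_size is only a planner estimate (it allocates nothing), and the usual way to pull a specific table into cache on demand is the pg_prewarm contrib module. A minimal sketch, assuming PostgreSQL 9.4+ and the "lookup" table from this thread; the 24GB figure is a placeholder, not a recommendation:

    -- effective_cache_size tells the planner how much OS + PostgreSQL cache
    -- to assume; raising it changes plan choices, it does not reserve RAM.
    ALTER SYSTEM SET effective_cache_size = '24GB';  -- placeholder value
    SELECT pg_reload_conf();

    -- pg_prewarm (contrib module, 9.4+) reads a relation into shared_buffers.
    -- It only helps if shared_buffers is large enough to keep the table resident.
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('lookup');  -- returns the number of blocks loaded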