On 15 August 2015 at 00:09, Deepak Balasubramanyam <deepak.b...@gmail.com>
wrote:

> Hi,
>
> I have a table (20 million rows) in PostgreSQL 9.4 that contains a bigint
> id as the primary key and another column that contains jsonb data. Queries
> run on this table look like so...
>
> ------------
> ## Query
> ------------
> select ... from table
> WHERE table.column -> 'item' ->> 'name' = 'value'
> ------------
>
> I'd like to make an effort to get PostgreSQL to keep all of the data in
> this table, and any index on this table, in memory. This would ensure that
> sequential or index scans made on the data are fairly fast.
>
> Research into this problem indicates that there is no reliable way to get
> PostgreSQL to run entirely from RAM (
> http://stackoverflow.com/a/24235439/830964). Assuming the table and its
> indexes amount to 15 GB of data on disk, and the machine has 64 GB of RAM
> with shared_buffers set anywhere from 16-24 GB, here are my questions...
>
> 1. When PostgreSQL returns data from this query, how can I tell how much
> of the data was cached in memory?
>
>
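
As a quick aside on the 15 GB figure above: the on-disk footprint of the table
and all of its indexes can be checked directly. A minimal sketch, with 'table'
standing in for the real table name:

select pg_size_pretty(pg_total_relation_size('table'));
-- pg_total_relation_size() covers the heap, all indexes and any TOAST data

As for the question itself: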
It depends on which memory you're talking about. If you mean pages that are
in shared_buffers, then you can simply run:

EXPLAIN (ANALYZE, BUFFERS) select ... from table;

You'll see Buffers: shared read=N if any buffers had to be "read from disk",
but keep in mind that they still might not have come from disk; they could
have been cached in memory by the operating system.
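
For example, run against the query above, the output might look something like
this (the plan shape and counters are purely illustrative; actual values depend
on your data and cache state):

EXPLAIN (ANALYZE, BUFFERS)
select ... from table
WHERE table.column -> 'item' ->> 'name' = 'value';

--   Seq Scan on table  (actual time=... rows=... loops=1)
--     Filter: (((column -> 'item'::text) ->> 'name'::text) = 'value'::text)
--     Buffers: shared hit=... read=...

Here "shared hit" counts pages found in shared_buffers, and "read" counts pages
that had to be requested from the operating system.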

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
