On Wed, Jul 25, 2012 at 2:51 PM, Peter Geoghegan <pe...@2ndquadrant.com> wrote:
> On 3 March 2012 20:22, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> Add it all up, and instead of pre-reading 32 consecutive 8K blocks, it
>> pre-reads only about 1 or 2 consecutive ones on the final merge.  Now
>> some of those could be salvaged by the kernel keeping track of
>> multiple interleaved read ahead opportunities, but in my hands vmstat
>> shows a lot of IO wait and shows reads that seem to be closer to
>> random IO than large read-ahead.  If it used truly efficient read
>> ahead, CPU would probably be limiting.
>
> Can you suggest a benchmark that will usefully exercise this patch?

I think the row counts below straddle the internal-to-external sort
transition on most 64-bit machines; the timings come from psql's \timing.



unpatched:

jeff=# set work_mem=16384;
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524200)) asdf;
Time: 498.944 ms
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524300)) asdf;
Time: 909.125 ms

patched:

jeff=# set work_mem=16384;
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524200)) asdf;
Time: 493.208 ms
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524300)) asdf;
Time: 497.035 ms


If you want to get a picture of what is going on internally, you can set:

set client_min_messages = log;
set trace_sort = on;

(Although trace_sort isn't all that informative as it currently
exists, it does at least let you see the transition from an internal
to an external sort.)

Cheers,

Jeff
