Hi PG devs!

Tom Lane <t...@sss.pgh.pa.us> writes:

>> Wait for first IO, issue second IO request
>> Compute
>> Already have second IO request, issue third
>> ...
>
>> We'd be a lot less sensitive to IO latency.
>
> It would take about five minutes of coding to prove or disprove this:
> stick a PrefetchBuffer call into heapgetpage() to launch a request for the
> next page as soon as we've read the current one, and then see if that
> makes any obvious performance difference.  I'm not convinced that it will,
> but if it did then we could think about how to make it work for real.

Sorry for dropping in so late...

I did all of this two years ago.  For TPC-H Q8, Q9, Q17, Q20, and Q21 I
saw a speedup of roughly 100% when combining index-scan prefetching with
nested-loop look-ahead (on the outer loop!).
(Measured on SSD with a prefetch/look-ahead distance of 32 pages and a
cold page cache / small RAM.)
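
For comparison, the seqscan experiment Tom describes above boils down to
roughly the following (untested sketch against heapgetpage() in heapam.c;
prefetch distance of one block, as in the quoted idea):

    /*
     * Sketch only: kick off an asynchronous read of the next block
     * before synchronously reading the current one.  rs_rd, rs_nblocks,
     * rs_cbuf, and rs_strategy are existing HeapScanDesc fields;
     * PrefetchBuffer() is effectively a no-op on platforms without
     * posix_fadvise support.
     */
    if (page + 1 < scan->rs_nblocks)
        PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, page + 1);

    /* unchanged: read the current page synchronously */
    scan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, page,
                                       RBM_NORMAL, scan->rs_strategy);

My experiments did the analogous prefetching on the index-scan side, with
the 32-page look-ahead distance mentioned above.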

Regards,
Daniel
-- 
MSc. Daniel Bausch
Research Assistant (Computer Science)
Technische Universität Darmstadt
http://www.dvs.tu-darmstadt.de/staff/dbausch


