----- Quote from Tom Lane (t...@sss.pgh.pa.us), on 22.10.2011 at 19:19 -----

> Andres Freund <and...@anarazel.de> writes:
>> On Saturday, October 22, 2011 05:20:26 PM Tom Lane wrote:
>>> Huh?  In the case he's complaining about, the index is all in RAM.
>>> Sequentiality of access is not an issue (at least not at the page
>>> level --- within a page I suppose there could be cache-line effects).
> 
>> I was talking about L2/L3 caches...
> 
> Yeah, but unless you think cache lines cross page boundaries (and we do
> take pains to align the buffers on 8K addresses), there's not going to
> be any sequentiality effect.  Even if there were, it would only apply
> if the pages chanced to be adjacent in the buffer array, and there is no
> reason to expect that to be the case, for either seqscans or indexscans.
> 
>                       regards, tom lane

I worked on the in-memory hash tables of the Parrot project. They are not the
same as btrees, but the structure and memory layout are not that different -
tuples are packed into pages, etc.

I have benchmarked iterating over such hash tables - a sequential scan of the
same table comes out 20-30% faster than a scan ordered by the hash value of
the key. And this overhead comes only from CPU cache lines - the numbers of
instructions executed on the processor are pretty much the same (counted by
valgrind).
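
For illustration, here is a minimal C sketch of the kind of micro-benchmark I
mean (not the actual Parrot code; the page size, tuple size and table size are
made-up assumptions): tuples packed into fixed-size pages, walked once in
physical order and once in a scattered order standing in for "ordered by hash
value".

/*
 * Hypothetical sketch, not Parrot's real code: compare walking
 * page-packed tuples in physical order vs. a scattered visit order.
 * Build e.g. with: cc -O2 cache_order.c -o cache_order
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SIZE       8192                       /* PostgreSQL-style page */
#define TUPLE_SIZE      64                         /* one cache line per tuple */
#define TUPLES_PER_PAGE (PAGE_SIZE / TUPLE_SIZE)
#define NPAGES          4096                       /* ~32 MB, fits in RAM */
#define NTUPLES         (NPAGES * TUPLES_PER_PAGE)

typedef struct { long key; char pad[TUPLE_SIZE - sizeof(long)]; } Tuple;

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    Tuple *pages = malloc((size_t) NTUPLES * sizeof(Tuple));
    long  *order = malloc((size_t) NTUPLES * sizeof(long));
    long   i, sum = 0;
    struct timespec t0, t1;

    for (i = 0; i < NTUPLES; i++)
    {
        pages[i].key = i;
        order[i] = i;
    }

    /* Shuffle the visit order to imitate "ordered by hash value":
     * logically ordered, physically scattered across pages. */
    srand(42);
    for (i = NTUPLES - 1; i > 0; i--)
    {
        long j = rand() % (i + 1);
        long tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    /* Sequential scan: physical page order, cache- and prefetch-friendly. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NTUPLES; i++)
        sum += pages[i].key;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.3f s\n", elapsed(t0, t1));

    /* Same number of additions, but hopping between pages. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NTUPLES; i++)
        sum += pages[order[i]].key;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("scattered:  %.3f s (sum %ld)\n", elapsed(t0, t1), sum);

    free(pages);
    free(order);
    return 0;
}

Both loops execute essentially the same instructions per tuple, so whatever
gap shows up between them comes from the memory access pattern alone - which
is the effect I measured.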

So I do think that if we scan indexes sequentially (in physical order), it
will help even when all the data is in the buffer cache.

Best regards

--
Luben Karavelov
