On Aug 13, 2008, at 3:52 PM, Decibel! wrote:
> The problem is that by looking for a constant row, you're actually
> eliminating the entire effect being tested, because the uncorrelated
> subselect is run ONCE as an initplan, and the entire query time is
> then spent elsewhere. The differences in runtime you're seeing are
> pure noise (the fact that you had to increase the iteration count so
> much should have been a clue here).

Makes sense, and yeah, I was wondering a bit about that. I'll try to
fake it out with OFFSET 0 later on if no one beats me to it; I do still
think we could just be seeing the effect of slogging through 200 tuples
instead of going directly to the one we want.
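For the record, the OFFSET 0 fake-out would look something like this (a sketch only, reusing the oneblock table from the test; whether OFFSET 0 actually blocks the initplan conversion here is something to verify with EXPLAIN, not a given):

```sql
-- Sketch: OFFSET 0 is the usual planner "fence"; the hope is that it
-- keeps the uncorrelated subselect from being collapsed into a
-- run-once initplan. Check EXPLAIN output to confirm the plan changes.
explain analyze
select (select value from oneblock where id = 1 offset 0)
from generate_series(1, 100000) j;
```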


OK, ran the test again via this query:

explain analyze select (select value from oneblock where id = i) from generate_series(1,1) i, generate_series(1,100000) j;

changing 1,1 to 200,200 as needed. I don't see any meaningful differences between 1,1 and 200,200. The seqscan case is still notably slower than the index case (~5500ms vs ~800ms).
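For anyone wanting to reproduce the comparison: the thread doesn't say exactly how the two plans were obtained, but the usual way is toggling the enable_* planner GUCs, e.g.:

```sql
-- Force the seqscan plan for the probe into oneblock:
set enable_indexscan = off;
set enable_bitmapscan = off;
explain analyze select (select value from oneblock where id = i)
from generate_series(1,1) i, generate_series(1,100000) j;
-- Restore defaults for the index case:
reset enable_indexscan;
reset enable_bitmapscan;
```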

It'd be useful to get strace data on this, but OS X doesn't have that :/ (and I'm on 10.4 so no dtrace either). Can someone get an strace from this?
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828

