On Tue, Apr 28, 2009 at 3:02 PM, Robert Haas <robertmh...@gmail.com> wrote:
> You may be right, but on the other hand, I'm not sure there's any
> sense in NOT trying to model the impact of the additional heap
> fetches.
Yeah, the flip side of the argument is that we generally try to do the best job we can modeling costs and let the arithmetic work out however it does, because you never know what kind of wacky situations will arise when planning queries, and the better the estimates, the better your chance of coming up with a good plan. For example, the planner may have other join orders available which allow it to avoid accessing those records entirely. So the comparison with a nested loop might not be the only comparison that matters. It might be a case of whether to run a bitmap scan against this table or some scan against another table to drive the join.

I have been running benchmarks comparing bitmap heap scans against index scans, amongst other comparisons. I haven't done CVS head yet, but on an older version, with effective_io_concurrency set to 0, scanning 1000 random tuples throughout a 130G table (one searched tuple per page) on a machine with 64G of RAM, after repeated executions index scans settle down to about 245s vs 205s for bitmap scans (over 100 iterations). So bitmap scans are about 16% faster for this use case.

Incidentally, with effective_io_concurrency set to 30 on this 30-drive RAID, the bitmap scans go down to 17s :)

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers