Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Apr 28, 2009 at 3:51 AM, Greg Stark <st...@enterprisedb.com> wrote:
>> If the logic you're suggesting would kick in at all it would be for a
>> narrow range of scan sizes,
> You may be right, but on the other hand, I'm not sure there's any
> sense in NOT trying to model the impact of the additional heap
> fetches.

I think it's probably useless.  In the first place, at reasonable values
of work_mem the effect is going to be negligible (in the sense that a
plain indexscan would never win).  In the second place, there isn't any
way to guess the extent of lossiness at plan time --- it depends on how
much the target rows are "clumped" on particular pages.  The planner
hasn't got any stats that would let it guess that, and even if we tried
to collect such stats they'd probably be too unstable to be useful.

There are boatloads of effects that the planner doesn't model.  This one
seems very far down the list of what we should worry about.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
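The clumping argument can be illustrated with a toy simulation (this is not PostgreSQL code; the page size, row counts, and function names are made up for the sketch). For the same number of matching rows, the number of distinct heap pages they land on -- which is what determines how painful a lossy bitmap gets -- differs by orders of magnitude between well-clustered and uniformly scattered data, and nothing in ordinary per-column stats distinguishes the two cases:

```python
import random

def distinct_pages(n_rows, n_pages, clumped):
    """Count the distinct heap pages occupied by n_rows matching rows.

    clumped=True packs the matches contiguously (well-correlated data);
    clumped=False scatters them uniformly over the table (uncorrelated).
    """
    rows_per_page = 100  # hypothetical tuples per heap page
    if clumped:
        # Contiguous matches fill pages densely: ceil(n_rows / rows_per_page).
        return (n_rows + rows_per_page - 1) // rows_per_page
    # Uniform scatter: sample distinct row positions, then count their pages.
    positions = random.sample(range(n_pages * rows_per_page), n_rows)
    return len({p // rows_per_page for p in positions})

random.seed(0)
n_rows, n_pages = 5000, 100_000
print(distinct_pages(n_rows, n_pages, clumped=True))   # few pages touched
print(distinct_pages(n_rows, n_pages, clumped=False))  # vastly more pages
```

With these numbers the clumped case touches 50 pages while the scattered case touches nearly 5000, i.e. nearly one page per row; a plan-time estimate of lossiness would have to know which of these physical layouts it is facing.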