Robert Haas <robertmh...@gmail.com> writes:
> Yeah, I thought about this too, but it seems like overkill for the
> problem at hand, and as you say it's not clear you'd get any benefit
> out of the upper bound anyway.  I was thinking of something simpler:
> instead of directly multiplying 0.005 into the selectivity every time
> you find something incomprehensible, keep a count of the number of
> incomprehensible things you saw and at the end multiply by 0.005/N.
> That way more unknown quals look more restrictive than fewer, but
> things only get linearly wacky instead of exponentially wacky.

clauselist_selectivity could perhaps apply such a heuristic, although
I'm not sure how it could recognize "default" estimates from the various
specific estimators, since they're mostly all different.
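To make the difference concrete, here is an illustrative sketch (plain Python, not PostgreSQL code) contrasting the current behavior of folding 0.005 into the product once per incomprehensible qual with Robert's proposal of counting them and applying a single 0.005/N factor at the end. The function names and structure are invented for illustration only.

```python
# DEFAULT_SEL mirrors the planner's 0.005 guess for a qual it can't analyze.
DEFAULT_SEL = 0.005

def naive_selectivity(known_sels, n_unknown):
    """Current behavior: multiply 0.005 in once per unknown qual,
    so the combined estimate shrinks exponentially in n_unknown."""
    sel = 1.0
    for s in known_sels:
        sel *= s
    return sel * DEFAULT_SEL ** n_unknown

def counted_selectivity(known_sels, n_unknown):
    """Proposed heuristic: count the unknowns and multiply by
    0.005/N once at the end, so more unknown quals still look more
    restrictive, but only linearly so."""
    sel = 1.0
    for s in known_sels:
        sel *= s
    if n_unknown > 0:
        sel *= DEFAULT_SEL / n_unknown
    return sel
```

With three unknown quals the naive product gives 0.005**3 = 1.25e-07, while the counted version gives 0.005/3, about 1.7e-03: still more restrictive than one unknown qual, but not absurdly so.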

Personally I've not seen all that many practical cases where the
estimator simply hasn't got a clue at all.  What's far more commonly
complained of IME is failure to handle *correlated* conditions in
an accurate fashion.  Maybe we should just discount the product
selectivity all the time, not only when we think the components are
default estimates.
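One hypothetical way to "discount the product all the time" would be to weaken the independence assumption by damping each successive factor, e.g. exponential backoff over the clauses sorted most-selective first. This is only a sketch of one possible discount scheme, not anything PostgreSQL implements:

```python
def discounted_product(sels):
    """Instead of the plain product (which assumes the clauses are
    independent), sort most-selective first and raise each later
    factor to a shrinking exponent: 1, 1/2, 1/4, ...  Correlated
    clauses then hurt the estimate less."""
    total = 1.0
    for i, s in enumerate(sorted(sels)):
        total *= s ** (1.0 / (2 ** i))
    return total
```

For two perfectly correlated clauses each with selectivity 0.1, independence predicts 0.01, while this discount gives 0.1 * sqrt(0.1), about 0.032, which is closer to the true combined selectivity of 0.1.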

                        regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers