On Aug 20, 2009, at 11:18 PM, Josh Berkus wrote:
>> I don't think it's a bad idea, I just think you have to set your
>> expectations pretty low. If the estimates are bad, there isn't really
>> any plan that will be guaranteed to run quickly.
> Well, the way to do this is via a risk-confidence system. Each
> operation gets a risk assigned to it: the cost multiplier we'd pay if
> the estimates turn out to be wrong. Each estimate gets a confidence
> attached. Then you divide the risk by the confidence, and if the
> ratio exceeds a certain threshold, you pick another plan with a lower
> risk/confidence ratio.
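
For what it's worth, here's a minimal sketch of that selection rule
spelled out in C. Everything in it is hypothetical (the struct, the
function, the threshold); it's just the risk/confidence division
described above, not anything that exists in the planner:

    /*
     * Hypothetical sketch: each candidate plan carries a risk
     * multiplier and a confidence in its estimates.
     */
    typedef struct PlanEstimate
    {
        double      cost;           /* estimated execution cost */
        double      risk;           /* cost multiplier if estimates are wrong */
        double      confidence;     /* 0..1 trust in the estimates */
    } PlanEstimate;

    static const PlanEstimate *
    choose_plan(const PlanEstimate *cheapest, const PlanEstimate *safer,
                double max_ratio)
    {
        double      ratio = cheapest->risk / cheapest->confidence;

        /*
         * Keep the cheapest plan unless its risk/confidence ratio is
         * over the limit and the alternative really is less risky.
         */
        if (ratio > max_ratio && safer->risk / safer->confidence < ratio)
            return safer;
        return cheapest;
    }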
> However, the number of extra calculations required for even a simple
> query is kind of frightening.
Would it? Risk seems like it would just be something along the lines
of the high end of our estimate. I don't think confidence should be
that hard either: hard-coded guesses get a low confidence, something
pulled right out of most_common_vals gets a high confidence, and
something estimated via a histogram bucket falls in between, perhaps
adjusted by the number of tuples.
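
To make that concrete, a rough sketch of the mapping; the enum, the
function, and the specific confidence values are illustrative guesses,
not anything in the stats code:

    #include <math.h>

    /* Where a selectivity estimate came from (made up for illustration). */
    typedef enum EstimateSource
    {
        SOURCE_DEFAULT_GUESS,       /* hard-coded default selectivity */
        SOURCE_HISTOGRAM_BUCKET,    /* interpolated within a histogram bucket */
        SOURCE_MCV                  /* exact hit in most_common_vals */
    } EstimateSource;

    static double
    estimate_confidence(EstimateSource source, double ntuples)
    {
        switch (source)
        {
            case SOURCE_MCV:
                return 0.9;         /* the stats saw this value directly */
            case SOURCE_HISTOGRAM_BUCKET:
                /* in between, trusted a bit more as the table grows */
                return fmin(0.7, 0.4 + 0.05 * log10(fmax(ntuples, 1.0)));
            case SOURCE_DEFAULT_GUESS:
            default:
                return 0.1;         /* pure guess */
        }
    }

The histogram case is where the tuple-count adjustment would come in:
a bucket boundary interpolated from a large sample deserves more trust
than one from a tiny table.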
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828