"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Speaking of plan instability, something that's badly needed is the
> ability to steer away from query plans that *might* be the most optimal,
> but also will fail horribly should the cost estimates be wrong.
You sure that doesn't leave us with the empty set :-( ?  Any plan we
pick will be horrible under some scenario.  I do agree that the current
lowest-cost-and-nothing-else criterion can pick some pretty brittle
plans, but it's not that easy to see how to improve it.  I don't think
either "best case" or "worst case" are particularly helpful concepts
here.  You'd almost have to try to develop an estimated probability
distribution, and that's way more info than we have.

> People generally don't care about getting the absolutely most optimal
> plan; they do care about NOT getting a plan that's horribly bad.

If 8.2 runs a query at half the speed of 8.1, people will be unhappy,
and they won't be mollified if you tell them that that plan is "better"
because it would have degraded less quickly if the planner's estimates
were wrong.  The complaint that started this thread (Philippe Lang's a
couple days ago) was in fact of exactly that nature: his query was
running slower than it had been because the planner was picking bitmap
scans instead of plain index scans.  Well, those bitmap scans would have
been a lot more robust in the face of bad rowcount estimates, but the
estimates weren't wrong.

			regards, tom lane
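[Editor's note: the trade-off under discussion — lowest estimated cost versus robustness to misestimation — can be sketched in a few lines. This is an illustrative toy, not PostgreSQL planner code; the plan names, cost numbers, and the `p_wrong` probability are all made up for the example.]

```python
# Hypothetical costs for two plans for the same query. The index scan is
# cheapest when the rowcount estimate is right but degrades badly when it
# is wrong; the bitmap scan is slightly slower but robust.
plans = {
    "index_scan":  {"estimate_ok": 100, "estimate_off": 5000},
    "bitmap_scan": {"estimate_ok": 150, "estimate_off": 300},
}

def lowest_cost(plans):
    """The current criterion: trust the estimates, pick the cheapest plan."""
    return min(plans, key=lambda p: plans[p]["estimate_ok"])

def minimax(plans):
    """A 'worst case' criterion: pick the plan whose worst outcome is best."""
    return min(plans, key=lambda p: max(plans[p].values()))

def expected_cost(plans, p_wrong=0.2):
    """The probability-distribution idea: weight each outcome by an
    (assumed) probability that the estimates are badly off."""
    return min(plans, key=lambda p: (1 - p_wrong) * plans[p]["estimate_ok"]
                                    + p_wrong * plans[p]["estimate_off"])

print(lowest_cost(plans))     # index_scan  -- brittle winner under the estimates
print(minimax(plans))         # bitmap_scan -- robust winner
print(expected_cost(plans))   # bitmap_scan: 0.8*150 + 0.2*300 = 180 vs 1080
```

The sketch makes Tom's point concrete: the three criteria disagree, and the probability-weighted one only works if you have a `p_wrong` to plug in, which is exactly the information the planner lacks.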