> In general I suspect that we'd be better off focussing on mitigating
> the impact at execution time. There are at least a few things that we
> could do there, at least in theory. Mostly very ambitious, long term
> things.
I think these things are orthogonal. No matter how good the cost model
ever gets, we will always have degenerate cases. Having some smarts
about that in the executor is surely a good thing, but it shouldn't
distract us from improving on the planner front.

> I like the idea of just avoiding unparameterized nested loop joins
> altogether when an "equivalent" hash join plan is available because
> it's akin to an execution-time mitigation, despite the fact that it
> happens during planning. While it doesn't actually change anything in
> the executor, it is built on the observation that we have virtually
> everything to gain and nothing to lose during execution, no matter
> what happens.

I agree with you that those plans are too risky. But let's maybe find a
more general way of dealing with this.

> Right. Though I am actually sympathetic to the idea that users might
> gladly pay a cost for performance stability -- even a fairly large
> cost. That part doesn't seem like the problem.
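The "everything to gain and nothing to lose" asymmetry can be sketched with a toy cost model. This is purely illustrative (the functions and numbers below are made up, not PostgreSQL's actual costing): when the outer-side row estimate is wrong, the unparameterized nested loop's cost grows multiplicatively, while the hash join's grows only additively.

```python
# Toy cost model comparing an unparameterized nested loop to a hash join.
# Illustrative only: these are not PostgreSQL's real cost formulas.

def nestloop_cost(outer_rows: int, inner_rows: int) -> int:
    # The inner side is rescanned once per outer row.
    return outer_rows * inner_rows

def hashjoin_cost(outer_rows: int, inner_rows: int) -> int:
    # Build the hash table once, then probe it once per outer row.
    return inner_rows + outer_rows

# Planner estimate: a single outer row, so both plans look equally cheap.
est = (nestloop_cost(1, 10_000), hashjoin_cost(1, 10_000))
# Reality: the outer estimate was off by 1000x.
act = (nestloop_cost(1_000, 10_000), hashjoin_cost(1_000, 10_000))
print(est)  # (10000, 10001) -- near-identical estimated costs
print(act)  # (10000000, 11000) -- the nested loop degrades ~1000x worse
```

When the estimate is right, the two plans cost about the same; when it is wrong, only the nested loop blows up. That is the sense in which preferring the hash join here gains robustness at essentially no expected cost.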