Mark Mielke wrote:

> Re-planning a generic plan with another generic plan may generate zero benefit, with a measurable cost. More on this after...

Nobody's talking about doing that any more. I proposed it initially because I didn't know about changes that made it unnecessary.


> All the points about ms seem invalid to me. There are many reasons why ms could increase, and many of them have nothing to do with plan efficiency. Again, re-planning due to a high ms, or a high ratio of ms, does not indicate that re-planning will improve the success of the plan. The planning process does not measure ms or predict ms.

That's true, but it misses two basic points about the idea: one, if we can tell in advance that a query is going to be expensive, then the cost of re-planning it is marginal relative to the cost of executing it. Two, if we can tell that a query is going to be expensive, then we stand to gain a lot if re-planning turns out to be useful. It follows that we can afford to re-plan on the off-chance, with nothing more than a vague orders-of-magnitude idea of what "expensive" means.
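To make the trade-off concrete, here's a minimal sketch of the kind of check I have in mind, in C. Every name in it (CachedQuery, REPLAN_COST_THRESHOLD, should_replan) is made up for illustration and is not anything in the PostgreSQL source; the point is only that the decision needs nothing beyond the cost estimate the planner already produces:

    #include <stdbool.h>

    typedef struct CachedQuery
    {
        void   *generic_plan;   /* plan built once, without parameter values */
        double  generic_cost;   /* the planner's cost estimate for that plan */
    } CachedQuery;

    /* Planner cost units, not ms; only the order of magnitude matters. */
    #define REPLAN_COST_THRESHOLD 10000.0

    static bool
    should_replan(const CachedQuery *cq)
    {
        /*
         * Cheap plan: re-planning could never win back its own overhead,
         * so keep executing the generic plan.
         */
        if (cq->generic_cost < REPLAN_COST_THRESHOLD)
            return false;

        /*
         * Expensive plan: planning time is marginal next to execution
         * time, and a parameter-aware plan might be far better.
         */
        return true;
    }

The threshold is deliberately crude; being wrong by a factor of two either way barely matters when the potential payoff is orders of magnitude.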

What Tom said validates a big assumption I've been making: that we do in fact have a decent shot at telling in advance that a query is going to be expensive. That means we also have a decent shot at stopping your 100 ms query from taking seconds just because you prepared it and are now missing out on that tiny partial index. That would be worth the extra planning time even at a 1% hit rate, and there's not much downside if we fall short of that.
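Back of the envelope, with every number invented for illustration: say the good plan runs in 100 ms, the stale generic plan in 5 s, re-planning costs 5 ms, and re-planning actually helps 1% of the time:

    #include <stdio.h>

    int
    main(void)
    {
        double good_ms   = 100.0;   /* execution time with the right plan */
        double bad_ms    = 5000.0;  /* execution time with the stale generic plan */
        double replan_ms = 5.0;     /* extra planning time paid on every execution */
        double hit_rate  = 0.01;    /* how often re-planning finds a better plan */

        double expected_saving = hit_rate * (bad_ms - good_ms);

        printf("expected saving: %.1f ms, cost: %.1f ms per execution\n",
               expected_saving, replan_ms);   /* 49.0 ms saved for 5.0 ms spent */
        return 0;
    }

Even at that hit rate we save roughly 49 ms per execution for 5 ms spent, and if the query never turns out to be mis-planned, the loss is bounded by planning time we already decided was marginal.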


> My idea of an optimal system is as follows:

> 1) Prepare gathers and caches data about the tables involved in the query, including column statistics that are likely to be required during the planning process, but prepare does not run the planning process.

It sounds to me like you're in the process of inventing another planning process. Developer time aside, how much CPU time can you afford to throw at this?
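To illustrate why I say that: even a bare-bones version of your point 1 has to decide which statistics are "likely to be required". A hypothetical sketch (none of these names exist in PostgreSQL) makes the problem visible:

    typedef struct ColumnStats
    {
        double n_distinct;      /* snapshot of per-column statistics */
        double null_frac;       /* (histograms, MCVs, ... would go here too) */
    } ColumnStats;

    typedef struct PreparedQuery
    {
        const char  *query_text;    /* parsed and analyzed at PREPARE time */
        ColumnStats *col_stats;     /* the cached "likely to be required" stats */
        int          ncols;
        void        *plan;          /* stays NULL until first EXECUTE */
    } PreparedQuery;

    /*
     * The catch: filling col_stats means predicting which columns the
     * planner will consult, which amounts to running a second, shadow
     * planning pass.  That shadow pass is the CPU time I'm asking about.
     */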

I don't see any reason to argue over what would be optimal when so much information is still missing. It just makes the problem look harder than it is. To me, our best shot at getting something useful is to stay simple and defensive. After that, if there is still a need, we'll have code to help us gather more data and figure out how to make it better. Nothing wrong with the lowest-hanging fruit.


Jeroen
