On 2016-01-13 17:26:43 +0300, Vladimir Sitnikov wrote:
> > consider e.g a table with one somewhat common and otherwise just unique
> > values.
>
> So what?
> If I understand you properly, you mean: "if client sends unique binds
> first 5-6 executions and bad non-unique afterwards, then cached plan
> would be bad". Is that what you are saying?

That's one of several problems, yes. Generally, using a very small sample
("the bind values in the first query") to plan every future query isn't
going to be fun.
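To make that concrete (table, column and statement names below are made
up for illustration, not taken from anyone's workload): one value covering
a large share of the rows, everything else unique.

  -- Hypothetical skewed table: val = 0 covers ~20% of the rows, every
  -- other value occurs exactly once.
  CREATE TABLE t AS
    SELECT CASE WHEN g % 5 = 0 THEN 0 ELSE g END AS val
    FROM generate_series(1, 1000000) AS g;
  CREATE INDEX ON t (val);
  ANALYZE t;

  -- Under the proposed scheme, whatever bind values happen to arrive
  -- first would determine the plan cached for every later execution.
  PREPARE lookup(int) AS SELECT * FROM t WHERE val = $1;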

> I agree that is a corner case for my suggestion.
> Is it really happening often?

Yes.

> I state the following:
> 1) It is way easier to debug & analyze.

Meh. That a prepared statement suddenly performs way differently
depending on what the first bind values happened to be is not, in any
way, easier to debug.

> For instance: the current documentation does *not* list a way to get a
> *generic plan*.

Which doesn't have anything to do with your proposal. That wouldn't
change with the change you propose.

> Is it obvious that "you just need to EXPLAIN ANALYZE EXECUTE *6
> times in a row*" just to get a generic plan?

No. And I hate that. I think it'd be very good to expand EXPLAIN's
output to include information about custom/generic plans.
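To spell out that dance with the made-up table and statement from the
sketch above: the plancache builds custom plans for the first five
executions and only considers the generic plan from the sixth one on, so
you have to repeat yourself just to see it.

  -- Executions 1..5 get custom plans built for the supplied bind value.
  EXPLAIN ANALYZE EXECUTE lookup(42);
  EXPLAIN ANALYZE EXECUTE lookup(42);
  EXPLAIN ANALYZE EXECUTE lookup(42);
  EXPLAIN ANALYZE EXECUTE lookup(42);
  EXPLAIN ANALYZE EXECUTE lookup(42);
  -- Only now, on the sixth execution, may the output show the generic
  -- plan (if its estimated cost is competitive with the custom plans).
  EXPLAIN ANALYZE EXECUTE lookup(42);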


> 3) What about "client sends top most common value 5 times in a row"?
> Why assume "it will stop doing that"?
> I think the better assumption is "it will continue doing that".

If 20% of your values are nonunique and the rest are unique, you'll get
*drastically* different plans, each performing badly for the other case,
with the plan built for the unique cardinality being extremely bad.
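With the made-up skewed table from above, that looks roughly like this:
the plan a unique bind value wants is nothing like the plan the common
value wants.

  -- Custom plan for a unique value: a selective index scan fetching a
  -- single row.
  EXPLAIN SELECT * FROM t WHERE val = 42;
  -- Custom plan for the common value: the planner expects ~200000 rows
  -- and will favour a sequential or bitmap scan instead.
  EXPLAIN SELECT * FROM t WHERE val = 0;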

Andres

