On 24/08/2013 21:16, Tom Lane wrote:
> Marc Cousin <cousinm...@gmail.com> writes:
>> On 23/08/2013 23:55, Tom Lane wrote:
>>> My previous suggestion was to estimate planning cost as
>>> 10 * (length(plan->rangetable) + 1)
>>> but on reflection it ought to be scaled by one of the cpu cost constants,
>>> so perhaps
>>> 1000 * cpu_operator_cost * (length(plan->rangetable) + 1)
>>> which'd mean a custom plan has to be estimated to save a minimum of
>>> about 5 cost units (more if more than 1 table is used) before it'll
>>> be chosen.  I'm tempted to make the multiplier be 10000 not 1000,
>>> but it seems better to be conservative about changing the behavior
>>> until we see how well this works in practice.
>>>
>>> Objections, better ideas?
>> No better idea as far as I'm concerned, of course :)
>> But it is a bit tricky to understand what is going on when you get
>> hit by it, and using a rough approximation of the planning cost
>> seems the most logical approach to me. So I'm all for this solution.
> I've pushed a patch along this line.  I verified it fixes your original
> example, but maybe you could try it on your real application?
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=005f583ba4e6d4d19b62959ef8e70a3da4d188a5
>
>                         regards, tom lane
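
For concreteness: with the default cpu_operator_cost of 0.0025, the proposed surcharge works out to 1000 * 0.0025 * (1 + 1) = 5 cost units for a single-table query, which is where the "about 5 cost units" figure above comes from. The standalone C sketch below illustrates the arithmetic and the resulting generic-vs-custom comparison; the helper names (planning_surcharge, prefer_custom_plan) are illustrative only, not the actual code from the commit.

/*
 * Illustrative sketch only -- not the committed plancache.c code.
 * Shows how the proposed planning-cost surcharge scales with the
 * rangetable length and how it would tilt the choice between a
 * cached generic plan and re-planning a custom plan each time.
 */
#include <stdbool.h>
#include <stdio.h>

/* default value of the cpu_operator_cost GUC */
static const double cpu_operator_cost = 0.0025;

/* crude planning-effort estimate, per the formula quoted above */
static double
planning_surcharge(int rangetable_length)
{
    return 1000.0 * cpu_operator_cost * (rangetable_length + 1);
}

/* hypothetical decision helper: charge custom plans for re-planning */
static bool
prefer_custom_plan(double avg_custom_cost, double generic_cost,
                   int rangetable_length)
{
    return avg_custom_cost + planning_surcharge(rangetable_length)
        < generic_cost;
}

int
main(void)
{
    int ntables;

    for (ntables = 1; ntables <= 4; ntables++)
        printf("%d table(s): surcharge = %.1f cost units\n",
               ntables, planning_surcharge(ntables));

    /* saving only 3 units on a 1-table query doesn't beat the 5-unit charge */
    printf("save 3 units: custom wins? %s\n",
           prefer_custom_plan(97.0, 100.0, 1) ? "yes" : "no");
    /* saving 10 units does */
    printf("save 10 units: custom wins? %s\n",
           prefer_custom_plan(90.0, 100.0, 1) ? "yes" : "no");
    return 0;
}

Raising the multiplier to 10000, as floated above, would lift the single-table threshold from 5 to 50 cost units.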
I think that won't be possible :(

It's one of those environments where you have to ask for lots of permissions before doing anything. I'll do my best to get them to test this patch.

Thanks a lot.

Marc

