On Thursday 16 July 2009 17:16:31 Tom Lane wrote:
> Andres Freund <and...@anarazel.de> writes:
> > On Thursday 16 July 2009 15:13:02 Tom Lane wrote:
> >> Andres Freund <and...@anarazel.de> writes:
> >>> "Error: Failed to make a valid plan"
> >>
> >> We're not going to be able to fix this unless you show us examples.
> >
> > In the other thread I attached a similar to the real schema + example
> > query. Not enough? And why?
>
> I tried the example query and couldn't get "Failed to make a valid plan"
> out of it ... what settings do you need for that?
It unfortunately depends on the settings and on luck. This dependence on luck 
was the reason I wanted geqo to behave "somewhat" deterministically...

With {join,from}_collapse_limit = 100 it seems to be triggered reliably. With 
lower values it seems harder to trigger, and with larger ones it simply takes 
too long to even get there.

Efficiency-wise, with geqo and higher limits nearly all of the time is spent in:

geqo
gimme_tree 
have_join_order_restriction
has_legal_joinclause
have_relevant_joinclause
have_relevant_eclass (30% self)
bms_overlap (50% self)

I do not yet fully understand geqo, but it looks like there are some 
possibilities to improve this.
Such efficiency improvements would not explain the completely failing plans, 
though...

Do you have an idea which kinds of plans benefit most from using geqo? I had a 
somewhat hard time finding any query where geqo was substantially faster than 
the standard join search.
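
As far as I can tell geqo only comes into play once the collapsed FROM list has 
at least geqo_threshold items (default 12). Roughly my reading of 
make_rel_from_joinlist() in optimizer/path/allpaths.c, reduced to a 
self-contained sketch with stand-ins for the real GUC variables:

    #include <stdbool.h>

    /* stand-ins for the real GUC variables */
    static bool enable_geqo = true;     /* GUC "geqo", default on */
    static int  geqo_threshold = 12;    /* GUC "geqo_threshold", default 12 */

    /* true if the genetic search would be used for this many joinlist items */
    static bool
    would_use_geqo(int levels_needed)
    {
        /* otherwise standard_join_search(), the exhaustive DP search, runs */
        return enable_geqo && levels_needed >= geqo_threshold;
    }

So any comparison only makes sense for queries that still have that many 
relations after collapsing.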

The profile above also somewhat explains why I saw improvements with 64-bit 
bitmapsets...
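
For context: bms_overlap is essentially just a word-by-word AND over the 
shorter of the two relid sets. A simplified, self-contained sketch, not the 
actual src/backend/nodes/bitmapset.c code:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t bitmapword;    /* 32 bits upstream; 64 bits in my test */

    typedef struct Bitmapset
    {
        int         nwords;         /* number of words in the array */
        bitmapword  words[1];       /* really [nwords] */
    } Bitmapset;

    static bool
    bms_overlap_sketch(const Bitmapset *a, const Bitmapset *b)
    {
        int         shortlen;
        int         i;

        if (a == NULL || b == NULL)
            return false;
        /* only words present in both sets can contain a common member */
        shortlen = (a->nwords < b->nwords) ? a->nwords : b->nwords;
        for (i = 0; i < shortlen; i++)
        {
            if ((a->words[i] & b->words[i]) != 0)
                return true;        /* some relid is in both sets */
        }
        return false;
    }

With relid sets spanning several words, 64-bit words halve both nwords and the 
iterations of that inner loop, which would explain the improvement.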


> However, I do observe that this seems a sufficient counterexample
> against the theory that we can just remove the collapse limits and let
> GEQO save us on very complex queries.  On my machine, the example query
> takes about 22 seconds to plan using CVS HEAD w/ all default settings.
> If I set both collapse_limit variables to very high values (I used 999),
> it takes ... um ... not sure; I gave up waiting after half an hour.
> I also tried with geqo_effort reduced to the minimum of 1, but that
> didn't produce a plan in reasonable time either (I gave up after ten
> minutes).  So if we remove the collapse limits, Postgres will completely
> fail on this query --- the only way out would be enable_join_ordering =
> off, which is hardly likely to produce a decent plan.
> Maybe we should leave the collapse_limit logic alone and address
> Robert's gripes by just raising the default values a lot (I'm thinking
> 100 or so).  That way there's an escape hatch for anyone who has
> pathological queries to deal with --- just dial the settings down.
Yes, I think that's sensible. I don't know whether there are any queries out 
there that benefit from higher limits.

Andres

