"Jonah H. Harris" <[EMAIL PROTECTED]> writes:
> As for using both in the same optimizer, I could only see an algorithm such
> as a customized A* being used to plan *some* large queries.  The reason I
> say this is that the cost calculation, which would still need to be
> breadth-first, could calculate and cache the cost of most nodes, possibly
> allowing you to perform transformations at the tail of the calculation.
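
To make that concrete: a best-first (A*-flavored) search over join
states, with per-state cost caching, might look roughly like the toy
below.  Everything in it -- the cost model, the cardinalities, the
left-deep restriction, the zero heuristic (which degenerates to
uniform-cost search) -- is invented for illustration, and has nothing
to do with our actual planner code:

/*
 * Toy best-first search over left-deep join orders.  A state is a
 * bitmask of the relations joined so far; best[] caches the cheapest
 * known cost to reach each state.  Cost model and cardinalities are
 * made up for illustration.
 */
#include <stdio.h>

#define NREL 4
#define NSTATES (1 << NREL)
#define INF 1e18

static double rows[NREL] = {1000, 50, 2000, 10};    /* invented cardinalities */

/* toy cost and output-size estimates for joining 'r' rows with rel i */
static double join_cost(double r, int i) { return r * rows[i] * 0.001; }
static double join_rows(double r, int i) { return r * rows[i] * 0.0001; }

int main(void)
{
    double best[NSTATES], outrows[NSTATES];
    int done[NSTATES];
    int i, s;

    for (s = 0; s < NSTATES; s++) { best[s] = INF; done[s] = 0; }
    for (i = 0; i < NREL; i++) { best[1 << i] = 0; outrows[1 << i] = rows[i]; }

    for (;;)
    {
        /* extract the cheapest unexpanded state; with a heuristic of
         * zero this is uniform-cost search, and an admissible lower
         * bound on the remaining cost would be added to best[] here */
        int cur = -1;
        for (s = 1; s < NSTATES; s++)
            if (!done[s] && best[s] < INF && (cur < 0 || best[s] < best[cur]))
                cur = s;
        if (cur < 0 || cur == NSTATES - 1)
            break;              /* all relations joined, or nothing left */
        done[cur] = 1;

        for (i = 0; i < NREL; i++)
        {
            int next;
            double c;

            if (cur & (1 << i))
                continue;
            next = cur | (1 << i);
            c = best[cur] + join_cost(outrows[cur], i);
            if (c < best[next])         /* cache the cheaper route */
            {
                best[next] = c;
                outrows[next] = join_rows(outrows[cur], i);
            }
        }
    }
    printf("cheapest left-deep plan cost: %g\n", best[NSTATES - 1]);
    return 0;
}

With a nonzero admissible lower bound added at the extraction step,
the same loop becomes a true A* and can avoid expanding states the
bound proves uninteresting.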

We do already have two different plan search algorithms: the strict
bottom-up dynamic programming approach (System R style) and the GEQO
optimizer, which we switch to when there are too many joins to make
exhaustive search feasible.  The GEQO code still relies on the normal
plan cost estimation code, but it doesn't consider every possible plan.
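
The exhaustive search is easy to show in miniature: build the cheapest
plan for each relation set of size k out of the cheapest plans for its
subsets.  The sketch below uses a made-up cost model and ignores
everything the real code has to worry about (join order constraints,
pathkeys, partial pruning of bushy plans, and so on):

/*
 * Minimal sketch of System R-style bottom-up join enumeration:
 * build level k from all splits into two smaller relation sets,
 * keeping the cheapest plan per set.  Cost numbers are invented.
 */
#include <stdio.h>

#define NREL 4
#define NSTATES (1 << NREL)
#define INF 1e18

static double rows[NREL] = {1000, 50, 2000, 10};

static int popcount(int s) { int n = 0; while (s) { n += s & 1; s >>= 1; } return n; }

int main(void)
{
    double best[NSTATES], outrows[NSTATES];
    int s, lhs, level;

    for (s = 1; s < NSTATES; s++) best[s] = INF;
    for (s = 0; s < NREL; s++) { best[1 << s] = 0; outrows[1 << s] = rows[s]; }

    /* strictly breadth-first: all 2-way joins, then 3-way, ... */
    for (level = 2; level <= NREL; level++)
        for (s = 1; s < NSTATES; s++)
        {
            if (popcount(s) != level)
                continue;
            /* consider every split of s into two nonempty halves */
            for (lhs = (s - 1) & s; lhs; lhs = (lhs - 1) & s)
            {
                int rhs = s & ~lhs;
                double c;

                if (best[lhs] == INF || best[rhs] == INF)
                    continue;
                c = best[lhs] + best[rhs]
                    + outrows[lhs] * outrows[rhs] * 0.001;  /* toy join cost */
                if (c < best[s])
                {
                    best[s] = c;
                    outrows[s] = outrows[lhs] * outrows[rhs] * 0.0001; /* fake selectivity */
                }
            }
        }
    printf("exhaustive best cost: %g\n", best[NSTATES - 1]);
    return 0;
}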

I've never been very happy with the GEQO code: the random component of
the algorithm means you get unpredictable (and sometimes awful) plans,
and the particular variant that we are using is really designed to solve
traveling-salesman problems.  It's at best a poor fit to the join
planning problem.
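
To illustrate the TSP heritage: GEQO encodes a candidate plan as a
"tour", i.e. a permutation of the relation indices, which it scores as
a left-deep join.  The toy below substitutes a random swap for the
edge-recombination crossover the real code uses, but it shows where
the nondeterminism comes from -- run it with a different seed and you
will likely get a different plan:

/*
 * How GEQO sees the problem: a candidate plan is a "tour" (a
 * permutation of relation indices) scored as a left-deep join.
 * A random swap stands in for the real crossover operator; cost
 * numbers are invented.
 */
#include <stdio.h>
#include <stdlib.h>

#define NREL 6

static double rows[NREL] = {1000, 50, 2000, 10, 300, 7};

static double tour_cost(const int *tour)
{
    double r = rows[tour[0]], cost = 0.0;
    int i;

    for (i = 1; i < NREL; i++)
    {
        cost += r * rows[tour[i]] * 0.001;  /* toy join cost */
        r *= rows[tour[i]] * 0.0001;        /* fake selectivity */
    }
    return cost;
}

int main(int argc, char **argv)
{
    int tour[NREL], i, gen;
    double best;

    srand(argc > 1 ? atoi(argv[1]) : 1);    /* the random component */
    for (i = 0; i < NREL; i++)
        tour[i] = i;
    best = tour_cost(tour);

    for (gen = 0; gen < 1000; gen++)
    {
        int a = rand() % NREL, b = rand() % NREL, t;
        double c;

        t = tour[a]; tour[a] = tour[b]; tour[b] = t;    /* mutate */
        c = tour_cost(tour);
        if (c < best)
            best = c;                                   /* keep improvement */
        else
        {
            t = tour[a]; tour[a] = tour[b]; tour[b] = t; /* revert */
        }
    }
    printf("best cost found: %g (seed-dependent)\n", best);
    return 0;
}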

So it seems interesting to me to think about replacing GEQO with a
rule-based optimizer for large join search spaces.

There are previous discussions about this in the archives, I believe.

                        regards, tom lane
