Re: [Fwd: Re: [GENERAL] GEQO randomness?]

2008-11-05 Thread Joshua Tolley
On Fri, Oct 31, 2008 at 06:57:25PM -0400, Eric Schwarzenbach wrote:
> 
> My problem with GEQO using a random number generator is that
> non-deterministic behavior is really hard to debug, and problems can go
> undiagnosed for ages. Frankly I would rather something fail all the
> time, than it work most of the time and fail just now and then. Never
> getting a good plan for a query would be an improvement because I would
> immediately be aware there's a problem and be forced to do something about
> it, as opposed to maybe realizing there is going to *sometimes* be a
> problem.

Randomness is an integral part of all kinds of algorithms designed to
solve really hard problems in reasonable time, like GEQO. Such
algorithms show up all the time in data modeling and optimization
problems.
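As a toy illustration (not the actual GEQO code; the cost function and all names here are made up), a genetic search over orderings looks roughly like this: start from random permutations, repeatedly breed the cheapest ones, and keep the best candidate found.

```python
import random

def order_cost(order):
    # Toy stand-in for a join-cost estimate: count out-of-order pairs,
    # so the ascending permutation is the unique "best plan".
    return sum(1 for a, b in zip(order, order[1:]) if a > b)

def crossover(p1, p2):
    # Order-preserving crossover: keep a prefix of one parent, then fill
    # in the remaining elements in the order they appear in the other.
    head = p1[:len(p1) // 2]
    return head + [x for x in p2 if x not in head]

def genetic_order_search(n_items=8, pop_size=20, generations=200, seed=None):
    rng = random.Random(seed)
    # Start from random permutations, much as GEQO starts from random
    # join sequences.
    pop = [rng.sample(range(n_items), n_items) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=order_cost)
        child = crossover(pop[0], pop[1])   # breed the two cheapest
        if rng.random() < 0.3:              # occasional mutation: swap two slots
            i, j = rng.sample(range(n_items), 2)
            child[i], child[j] = child[j], child[i]
        pop[-1] = child                     # replace the worst individual
    return min(pop, key=order_cost)

best = genetic_order_search(seed=42)
print(best, order_cost(best))
```

The point is only that the randomness is structural: the search trades certainty of the optimum for bounded running time.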

- Josh / eggyknap




Re: [Fwd: Re: [GENERAL] GEQO randomness?]

2008-11-04 Thread Martijn van Oosterhout
On Fri, Oct 31, 2008 at 06:57:25PM -0400, Eric Schwarzenbach wrote:
> 
> My problem with GEQO using a random number generator is that
> non-deterministic behavior is really hard to debug, and problems can go
> undiagnosed for ages. Frankly I would rather something fail all the
> time, than it work most of the time and fail just now and then. Never
> getting a good plan for a query would be an improvement because I would
> immediately be aware there's a problem and be forced to do something about
> it, as opposed to maybe realizing there is going to *sometimes* be a
> problem.

I haven't seen any EXPLAIN output here showing wildly different results, but
GEQO is not nearly as non-deterministic as you suggest. The problem is
that if you are joining 100 tables it might take the length of the
universe to check all combinations. GEQO is an algorithm to find a
pretty good plan, and by and large it works well.

If you don't like it, turn it off (it's called geqo_threshold or
something like that). Or post some actual examples of it going wrong.
Seems to me like your stats are out to lunch, which means postgres
can't find the best plan even if it wanted to.
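If stale statistics really are the culprit, refreshing them is cheap to try (these are standard PostgreSQL commands; the table name and query are placeholders):

```sql
-- Recompute planner statistics for a suspect table (plain ANALYZE with
-- no argument does the whole database).
ANALYZE some_big_table;

-- Then compare a few runs; with sane stats the plan choice should settle.
EXPLAIN ANALYZE
SELECT ...;   -- your problem query here
```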

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] GEQO randomness?

2008-11-04 Thread Bruce Momjian
Eric Schwarzenbach wrote:
> This is in a sense a followup to my post with subject "Wildly erratic
> query performance". The more I think about it the only thing that makes
> sense of my results is if the query planner really WAS choosing my join
> order truly randomly each time. I went digging into the manual and
> Section 49.3.1. "Generating Possible Plans with GEQO" says
> 
> "In the initial stage, the GEQO code simply generates some possible join
> sequences at random."
> 
> Now ordinarily I would interpret this use of the word "random" loosely, to
> mean "arbitrarily" or "using some non-meaningful selection criteria". But
> given what I am seeing, this leads me to consider that "random" is meant
> literally, and that it uses a random number generator to pick paths. Can
> someone confirm that this is the case?

Yes, "random" means random.

> Is this really a good idea? Is non-deterministic behavior really
> acceptable? I would think it would be much more sensible to have it
> operate deterministically (such as with some predetermined random
> sequence of numbers used repeatedly).

Uh, no one has ever asked for that.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[Fwd: Re: [GENERAL] GEQO randomness?]

2008-11-04 Thread Eric Schwarzenbach

My problem with GEQO using a random number generator is that
non-deterministic behavior is really hard to debug, and problems can go
undiagnosed for ages. Frankly I would rather something fail all the
time, than it work most of the time and fail just now and then. Never
getting a good plan for a query would be an improvement because I would
immediately be aware there's a problem and be forced to do something about
it, as opposed to maybe realizing there is going to *sometimes* be a
problem.

Suppose a complex query, like mine, had an even more rarely occurring
bad-plan result, where, as with mine, now and then the query would simply
go out to lunch for all intents and purposes and bog down the server for
the next 30 minutes.  But suppose that result was rarer than in my case,
and the developer never saw it, and blithely sent it out into
production. Every now and then the system would start performing
horribly and no one would know why. The developers might hear of it and
bring in the debugger, and perhaps simply never duplicate it because
it's so erratic. In fact, I'd be willing to bet there are any number of
production applications out in the wild using postgresql with that very
problem and the problem is just never traced back to postgresql.

I'm sorry if I sound strident, but I feel strongly about non-determinacy
in systems being a Bad Thing, and wish to convey why. I understand from
the documentation that the postgresql team is aware the algorithm is not
ideal, and appreciate the non-triviality of replacing it. I do
appreciate your responses and your suggestions.

For my own case, I'll certainly be doing one or more of the alternatives
you mentioned (#1 for the short term, at least), and I've had #3 in mind
even before I ran into this problem (the only question is when I will
have time to do it).

Thanks again,

Eric



Tom Lane wrote:
> Eric Schwarzenbach <[EMAIL PROTECTED]> writes:
>   
>> Now ordinarily I would interpret this use of the word "random" loosely, to
>> mean "arbitrarily" or "using some non-meaningful selection criteria". But
>> given what I am seeing, this leads me to consider that "random" is meant
>> literally, and that it actually uses a random number generator to choose
>> paths. Can someone confirm that this really is the case?
>> 
>
> What it's doing is searching a subset of the space of all possible join
> orders.  It still picks the best (according to cost estimate) plan
> within that subset, but if you're unlucky there may be no very good plan
> in that subset.  And yes, there is a random number generator in there.
>
>   
>> If so, is this really a good idea?
>> 
>
> The alternatives are not very appealing either ...
>
>   
>> I would think it would be much more sensible to have it
>> operate deterministically (such as with some predetermined random
>> sequence of numbers used repeatedly).
>> 
>
> ... in particular, that one's hardly a panacea.  For one thing, a
> not-unlikely outcome would be that you *never* get a good plan and thus
> don't even get a hint that you might be missing something.  For another,
> the data values used in the query and the current ANALYZE statistics
> also affect the search, which means that in the real world where those
> things change, you'd still be exposed to getting the occasional
> unexpectedly bad plan.
>
> There are a number of alternatives you can consider though:
>
> 1. Disable geqo or bump up the threshold enough that it's not used for
> your query.  Whether this is a feasible answer is impossible to say with
> the limited detail you've provided.  (Remember that potentially
> exponential search time.)
>
> 2. Increase geqo_effort to make the randomized search run a bit longer
> and examine more plans.  This just decreases the probability of losing,
> but maybe it will do so enough that you won't care anymore.
>
> 3. Figure out what's a good join order, rewrite your query to explicitly
> join in that order, and *decrease* join_collapse_limit to force the
> planner to follow that order instead of searching.  Permanent solution
> but the initial development effort is high, especially if you have a lot
> of different queries that need this treatment.
>
> 4. Write a better randomized-search algorithm and submit a patch ;-)
> We have good reason to think that the GEQO code is not a really
> intelligent approach to doing randomized plan searching --- it's based
> on an algorithm designed to solve traveling-salesman problems, which is
> not such a good match to join-order problems --- but no one's yet gotten
> motivated to replace it.
>
>   regards, tom lane
>
>   






[GENERAL] GEQO randomness?

2008-11-04 Thread Eric Schwarzenbach
This is in a sense a followup to my post with subject "Wildly erratic
query performance". The more I think about it the only thing that makes
sense of my results is if the query planner really WAS choosing my join
order truly randomly each time. I went digging into the manual and
Section 49.3.1. "Generating Possible Plans with GEQO" says

"In the initial stage, the GEQO code simply generates some possible join
sequences at random."

Now ordinarily I would interpret this use of the word "random" loosely, to
mean "arbitrarily" or "using some non-meaningful selection criteria". But
given what I am seeing, this leads me to consider that "random" is meant
literally, and that it uses a random number generator to pick paths. Can
someone confirm that this is the case?

Is this really a good idea? Is non-deterministic behavior really
acceptable? I would think it would be much more sensible to have it
operate deterministically (such as with some predetermined random
sequence of numbers used repeatedly).

Eric





[Fwd: Re: [GENERAL] GEQO randomness?]

2008-10-31 Thread Eric Schwarzenbach

My problem with GEQO using a random number generator is that
non-deterministic behavior is really hard to debug, and problems can go
undiagnosed for ages. Frankly I would rather something fail all the
time, than it work most of the time and fail just now and then. Never
getting a good plan for a query would be an improvement because I would
immediately be aware there's a problem and be forced to do something about
it, as opposed to maybe realizing there is going to *sometimes* be a
problem.

Suppose a complex query, like mine, had an even more rarely occurring
bad-plan result, where, as with mine, now and then the query would simply
go out to lunch for all intents and purposes and bog down the server for
the next 30 minutes.  But suppose that result was rarer than in my case,
and the developer never saw it, and blithely sent it out into
production. Every now and then the system would start performing
horribly and no one would know why. The developers might hear of it and
bring in the debugger, and perhaps simply never duplicate it because
it's so erratic. In fact, I'd be willing to bet there are any number of
production applications out in the wild using postgresql with that very
problem and the problem is just never traced back to postgresql.

I'm sorry if I sound strident, but I feel strongly about non-determinacy
in systems being a Bad Thing, and wish to convey why. I understand from
the documentation that the postgresql team is aware the algorithm is not
ideal, and I appreciate the non-triviality of replacing it. And I do
appreciate your responses and your suggestions.

For my own case, I'll certainly be doing one or more of the alternatives
you mentioned (#1 for the short term, at least), and I've had #3 in mind
even before I ran into this problem (the only question is when I will
have time to do it).

Thanks again,

Eric

Tom Lane wrote:
> Eric Schwarzenbach <[EMAIL PROTECTED]> writes:
>   
>> Now ordinarily I would interpret this use of the word "random" loosely, to
>> mean "arbitrarily" or "using some non-meaningful selection criteria". But
>> given what I am seeing, this leads me to consider that "random" is meant
>> literally, and that it actually uses a random number generator to choose
>> paths. Can someone confirm that this really is the case?
>> 
>
> What it's doing is searching a subset of the space of all possible join
> orders.  It still picks the best (according to cost estimate) plan
> within that subset, but if you're unlucky there may be no very good plan
> in that subset.  And yes, there is a random number generator in there.
>
>   
>> If so, is this really a good idea?
>> 
>
> The alternatives are not very appealing either ...
>
>   
>> I would think it would be much more sensible to have it
>> operate deterministically (such as with some predetermined random
>> sequence of numbers used repeatedly).
>> 
>
> ... in particular, that one's hardly a panacea.  For one thing, a
> not-unlikely outcome would be that you *never* get a good plan and thus
> don't even get a hint that you might be missing something.  For another,
> the data values used in the query and the current ANALYZE statistics
> also affect the search, which means that in the real world where those
> things change, you'd still be exposed to getting the occasional
> unexpectedly bad plan.
>
> There are a number of alternatives you can consider though:
>
> 1. Disable geqo or bump up the threshold enough that it's not used for
> your query.  Whether this is a feasible answer is impossible to say with
> the limited detail you've provided.  (Remember that potentially
> exponential search time.)
>
> 2. Increase geqo_effort to make the randomized search run a bit longer
> and examine more plans.  This just decreases the probability of losing,
> but maybe it will do so enough that you won't care anymore.
>
> 3. Figure out what's a good join order, rewrite your query to explicitly
> join in that order, and *decrease* join_collapse_limit to force the
> planner to follow that order instead of searching.  Permanent solution
> but the initial development effort is high, especially if you have a lot
> of different queries that need this treatment.
>
> 4. Write a better randomized-search algorithm and submit a patch ;-)
> We have good reason to think that the GEQO code is not a really
> intelligent approach to doing randomized plan searching --- it's based
> on an algorithm designed to solve traveling-salesman problems, which is
> not such a good match to join-order problems --- but no one's yet gotten
> motivated to replace it.
>
>   regards, tom lane
>
>   






Re: [GENERAL] GEQO randomness?

2008-10-31 Thread Tom Lane
Eric Schwarzenbach <[EMAIL PROTECTED]> writes:
> Now ordinarily I would interpret this use of the word "random" loosely, to
> mean "arbitrarily" or "using some non-meaningful selection criteria". But
> given what I am seeing, this leads me to consider that "random" is meant
> literally, and that it actually uses a random number generator to choose
> paths. Can someone confirm that this really is the case?

What it's doing is searching a subset of the space of all possible join
orders.  It still picks the best (according to cost estimate) plan
within that subset, but if you're unlucky there may be no very good plan
in that subset.  And yes, there is a random number generator in there.

> If so, is this really a good idea?

The alternatives are not very appealing either ...

> I would think it would be much more sensible to have it
> operate deterministically (such as with some predetermined random
> sequence of numbers used repeatedly).

... in particular, that one's hardly a panacea.  For one thing, a
not-unlikely outcome would be that you *never* get a good plan and thus
don't even get a hint that you might be missing something.  For another,
the data values used in the query and the current ANALYZE statistics
also affect the search, which means that in the real world where those
things change, you'd still be exposed to getting the occasional
unexpectedly bad plan.
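A tiny sketch of that first point (purely illustrative, not PostgreSQL code): seed a randomized search with a constant and every execution samples exactly the same subset, so a bad outcome is silently repeated on every run.

```python
import random

def random_search(cost, candidates, tries, rng):
    # Sample `tries` candidates at random and keep the cheapest one seen;
    # a crude stand-in for any randomized plan search.
    best = rng.choice(candidates)
    for _ in range(tries - 1):
        c = rng.choice(candidates)
        if cost(c) < cost(best):
            best = c
    return best

candidates = list(range(1000))        # stand-ins for possible join orders
cost = lambda c: abs(c - 417)         # pretend order 417 is the one good plan

# Fixed seed: five "runs" examine the identical sample, so if the good plan
# isn't in it, no run ever hints that something better exists.
fixed = [random_search(cost, candidates, 20, random.Random(0)) for _ in range(5)]
assert len(set(fixed)) == 1

# Fresh seeds: different runs explore different subsets, so the variance
# (good and bad) at least becomes visible.
varied = [random_search(cost, candidates, 20, random.Random(s)) for s in range(5)]
print(fixed[0], varied)
```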

There are a number of alternatives you can consider though:

1. Disable geqo or bump up the threshold enough that it's not used for
your query.  Whether this is a feasible answer is impossible to say with
the limited detail you've provided.  (Remember that potentially
exponential search time.)

2. Increase geqo_effort to make the randomized search run a bit longer
and examine more plans.  This just decreases the probability of losing,
but maybe it will do so enough that you won't care anymore.

3. Figure out what's a good join order, rewrite your query to explicitly
join in that order, and *decrease* join_collapse_limit to force the
planner to follow that order instead of searching.  Permanent solution
but the initial development effort is high, especially if you have a lot
of different queries that need this treatment.

4. Write a better randomized-search algorithm and submit a patch ;-)
We have good reason to think that the GEQO code is not a really
intelligent approach to doing randomized plan searching --- it's based
on an algorithm designed to solve traveling-salesman problems, which is
not such a good match to join-order problems --- but no one's yet gotten
motivated to replace it.
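Concretely, options 1-3 correspond to standard GUC settings; the values below are only illustrative, not recommendations:

```sql
-- Option 1: keep the exhaustive planner for this query by raising the
-- threshold past its join count, or disable GEQO for the session.
SET geqo_threshold = 14;     -- default 12; GEQO is used at or above this
-- SET geqo = off;           -- blunter: never use GEQO at all

-- Option 2: let the randomized search run longer and examine more plans.
SET geqo_effort = 8;         -- range 1..10, default 5

-- Option 3: write the joins in a known-good order and pin it.
SET join_collapse_limit = 1; -- planner follows the written JOIN order
SELECT ...                   -- query text elided; joins execute as written
  FROM a
  JOIN b ON b.a_id = a.id
  JOIN c ON c.b_id = b.id;
```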

regards, tom lane



[GENERAL] GEQO randomness?

2008-10-31 Thread Eric Schwarzenbach
This is in a sense a followup to my post with subject "Wildly erratic
query performance".

The more I think about it the only thing that makes sense of my results
is if the query planner really WAS choosing my join order truly randomly
each time. I went digging into the manual and Section 49.3.1.
"Generating Possible Plans with GEQO" says

"In the initial stage, the GEQO code simply generates some possible join
sequences at random."

Now ordinarily I would interpret this use of the word "random" loosely, to
mean "arbitrarily" or "using some non-meaningful selection criteria". But
given what I am seeing, this leads me to consider that "random" is meant
literally, and that it actually uses a random number generator to choose paths.
Can someone confirm that this really is the case?

If so, is this really a good idea? Is non-deterministic behavior really
acceptable? I would think it would be much more sensible to have it
operate deterministically (such as with some predetermined random
sequence of numbers used repeatedly).

Eric

