Rod Taylor <[EMAIL PROTECTED]> writes:
>> One objection to this is that after moving "off the gold standard" of
>> 1.0 = one page fetch, there is no longer any clear meaning to the
>> cost estimate units; you're faced with the fact that they're just an
>> arbitrary scale.  I'm not sure that's such a bad thing, though.  For
>> instance, some people might want to try to tune their settings so that
>> the estimates are actually comparable to milliseconds of real time.

> Any chance that the correspondence to time could be made a deliberate
> part of the design, with people generally advised to follow that rule?

We might eventually get to that point, but I'm hesitant to try to do it
immediately.  For one thing, I really *don't* want to get bug reports
from newbies complaining that the cost estimates are always off by a
factor of X.  (Not but what we haven't gotten some of those anyway :-()
In the short term I see us sticking to the convention that seq_page_cost
is 1.0 in a "typical" database, while anyone who's really hot to make the
cost-equals-milliseconds idea work is free to experiment.
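
For anyone who does want to experiment, here is a minimal sketch of that
idea (the numbers are purely illustrative assumptions, not recommendations):
measure the average sequential page fetch time on your hardware, set
seq_page_cost to that value in milliseconds, and scale the other cost GUCs
by the same factor so their ratios are preserved.  For example, assuming a
measured ~0.2 ms per sequential page fetch and ~0.8 ms per random fetch:

    -- Purely illustrative settings, assuming ~0.2 ms per sequential page
    -- fetch and ~0.8 ms per random page fetch on this particular system.
    -- All defaults are scaled by the same 0.2 factor so the ratios between
    -- the cost parameters stay the same; only the units change.
    SET seq_page_cost = 0.2;          -- default 1.0
    SET random_page_cost = 0.8;       -- default 4.0
    SET cpu_tuple_cost = 0.002;       -- default 0.01
    SET cpu_index_tuple_cost = 0.001; -- default 0.005
    SET cpu_operator_cost = 0.0005;   -- default 0.0025

    -- With that scaling, a plan's total cost in EXPLAIN output can be read
    -- as a rough millisecond estimate and compared against the actual time
    -- reported by EXPLAIN ANALYZE.  ("some_table" is just a placeholder.)
    EXPLAIN ANALYZE SELECT count(*) FROM some_table;

Whether the estimates actually line up with the measured times will of
course depend on how well the CPU cost factors match the hardware.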

> If we could tell people to run *benchmark* and use those numbers
> directly as a first-approximation tuning, it could help quite a bit
> for people new to PostgreSQL who are experiencing poor performance.

We don't have such a benchmark ... if we did, we could have told
people how to use it to set the variables already.  I'm very very
suspicious of any suggestion that it's easy to derive appropriate
numbers for these settings from one magic benchmark.

                        regards, tom lane
