Tom Lane wrote:
> One objection to this is that after moving "off the gold standard" of
> 1.0 = one page fetch, there is no longer any clear meaning to the
> cost estimate units; you're faced with the fact that they're just an
> arbitrary scale.  I'm not sure that's such a bad thing, though.
It seems to me the appropriate gold standard is Time, in microseconds
or milliseconds.
The default postgresql.conf could ship with a set of hardcoded
values that reasonably approximate some real-world system; and
if that reference system is documented in the file, someone
reading it can say "hey, my CPU's about the same but my disk
subsystem is much faster, so I know in which direction to
change things".
And another person may say "ooh, now I know that my 4GHz
machines and my 2GHz box should differ by about a factor of
two in the CPU numbers here".
People who *really* care a lot (HW vendors?) could eventually
make actual measurements on their systems.
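As a rough illustration of what such a measurement could look like,
here is a small Python sketch (my own toy, not an existing PostgreSQL
tool) that times sequential 8kB reads of a large file and a trivial
CPU loop, and reports both in milliseconds.  The file path and
iteration counts are assumptions, and a real calibration would have
to control for OS caching:

    import time

    PAGE_SIZE = 8192                   # PostgreSQL's default block size
    TEST_FILE = "/tmp/big_test_file"   # hypothetical large file to read

    def time_page_reads(path, max_pages=10000):
        """Average milliseconds to read one 8kB page sequentially."""
        start = time.perf_counter()
        n = 0
        with open(path, "rb") as f:
            while n < max_pages:
                if not f.read(PAGE_SIZE):
                    break
                n += 1
        elapsed = time.perf_counter() - start
        return elapsed * 1000.0 / max(n, 1)

    def time_cpu_ops(ops=1_000_000):
        """Average milliseconds for one trivial CPU operation."""
        start = time.perf_counter()
        total = 0
        for i in range(ops):
            total += i                 # stand-in for per-tuple CPU work
        elapsed = time.perf_counter() - start
        return elapsed * 1000.0 / ops

    if __name__ == "__main__":
        page_ms = time_page_reads(TEST_FILE)
        cpu_ms = time_cpu_ops()
        print(f"seq page fetch : {page_ms:.4f} ms")
        print(f"cpu operation  : {cpu_ms:.6f} ms")
        print(f"ratio          : {page_ms / cpu_ms:.0f} cpu ops per page fetch")

Numbers from a toy like this would at least put the disk and CPU
constants on the same physical scale, which is the whole point of
using time as the unit.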