Greg Stark <[EMAIL PROTECTED]> writes:
> That's a valid point. The ms/cost factor may not be constant over time.
> However, I think in the normal case this number will tend towards a fairly
> consistent value across queries and over time. It will be influenced somewhat
> by things like cache contention with other applications, though.

I think it would be interesting to collect the numbers over a long
period of time and try to learn something from the averages.  The real
hole in Neil's original suggestion was that it assumed that comparisons
based on just a single query would be meaningful enough to pester the
user about.
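
Something as simple as a per-backend accumulator might be enough to get
started.  A very rough sketch of the shape of it (every name here is
invented for illustration; nothing like this exists in the backend):

    #include <stdint.h>

    /* Hypothetical accumulator for the observed ms-per-cost-unit ratio. */
    typedef struct CostCalibration
    {
        double      total_actual_ms;    /* sum of measured query runtimes */
        double      total_est_cost;     /* sum of planner total-cost estimates */
        uint64_t    nqueries;           /* number of queries folded in */
    } CostCalibration;

    static CostCalibration calib;

    /* Fold in one completed query's planner estimate and measured runtime. */
    static void
    calib_record(double est_cost, double actual_ms)
    {
        calib.total_actual_ms += actual_ms;
        calib.total_est_cost += est_cost;
        calib.nqueries++;
    }

    /* Long-run average ms per cost unit; only worth looking at once
       nqueries is reasonably large. */
    static double
    calib_ms_per_cost(void)
    {
        return (calib.total_est_cost > 0.0) ?
            calib.total_actual_ms / calib.total_est_cost : 0.0;
    }

The point is that a single query tells you nearly nothing, but the ratio
averaged over thousands of queries might actually be worth reporting.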

> On further thought, the real problem is that these numbers are only available
> when running with "explain" on. As shown recently on one of the lists, the
> cost of the repeated gettimeofday calls can be substantial. It's not really
> feasible to suggest running all queries with that profiling.

Yeah.  You could imagine a simplified-stats mode that collects only the
total runtime (two gettimeofday() calls per query is nothing) and the row
counts (that shouldn't be impossibly expensive either, especially if we
merged the needed fields into PlanState instead of requiring a
separately allocated node).  Not sure whether that's as useful, though.
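
Roughly the shape I have in mind (the names below are made up for
illustration; this isn't the real PlanState or Instrumentation code):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/time.h>

    /* Hypothetical fields merged straight into each node's state struct,
       rather than hanging a separately allocated node off it. */
    typedef struct SimpleStats
    {
        uint64_t    ntuples;        /* rows this node has returned so far */
    } SimpleStats;

    /* Whole-query timing: exactly two gettimeofday() calls per query. */
    static struct timeval query_start;

    static void
    stats_query_begin(void)
    {
        gettimeofday(&query_start, NULL);
    }

    static double
    stats_query_end_ms(void)
    {
        struct timeval now;

        gettimeofday(&now, NULL);
        return (now.tv_sec - query_start.tv_sec) * 1000.0 +
               (now.tv_usec - query_start.tv_usec) / 1000.0;
    }

    /* Per-row bookkeeping then collapses to a single increment in the
       tuple-return path, something like:  node->stats.ntuples++;  */

So the per-row overhead is a counter increment and the timing overhead is
two syscalls per query, rather than two per node visit.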

                        regards, tom lane
