Euler Taveira de Oliveira wrote:

> IIRC pgFouine shows the exact percentage of queries by type; this new GUC
> means we can no longer rely on it. It could also keep us from logging
> long-running (and bad) queries. Other statistics in pgFouine will suffer
> from the same problem. I see your point about reducing the amount of log
> data, but I think it will limit the tool's usability (you can always
> start/stop statement collection at runtime).

The accuracy of the numbers does suffer, yes. But this option essentially gives you a choice: log all queries for a short time (missing the queries before and after that window) or a sample of queries for a long time (missing randomly selected queries in between). If you're interested in the bigger picture, the sampled numbers are more meaningful. (Also, if the query volume is high enough for someone to want this option, there will be plenty of samples anyway.)

For example, it might be more useful to log 1% of queries during the peak hours of every day of the month than all queries during the peak hours of a single day. Or you could log 0.01% of all queries all the time... Frequently executed queries can be found just as well with this approach, and chronically slow queries will eventually show up too. But if someone specifically wants to track each and every long-running query, there's still the option of leaving log_duration_sample disabled and setting log_min_duration_statement to something appropriate instead.
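To make the sampling idea concrete, here is a minimal sketch of how a per-statement sampling decision could work. The name log_duration_sample and its semantics (a fraction, e.g. 0.01 = log roughly 1% of statements, each decided independently) are assumptions for illustration only, not actual PostgreSQL behavior or source:

```python
import random

LOG_DURATION_SAMPLE = 0.01  # hypothetical GUC value: log ~1% of statements


def should_log_statement(rng: random.Random) -> bool:
    """Decide independently for each statement whether it gets logged,
    with the configured sampling probability."""
    return rng.random() < LOG_DURATION_SAMPLE


# Simulate a busy server: even at a 1% rate, a high-volume workload
# still yields plenty of samples for tools like pgFouine to aggregate.
rng = random.Random(42)
total = 100_000
logged = sum(should_log_statement(rng) for _ in range(total))
print(f"logged {logged} of {total} statements (~{100 * logged / total:.2f}%)")
```

The point of the independent coin flip per statement is that the logged subset is an unbiased sample of the workload, so per-query-type percentages remain statistically meaningful, just with some sampling noise.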

Another point is that during the time of heaviest load (= the most interesting time), full logging would increase the I/O load, bias the results, and degrade service quality. By spreading the logging out, the bias is smaller and logging can actually be enabled during peak hours at all.

Thanks for the feedback.

timo

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
