Title: RE: [HACKERS] RFC: built-in historical query time profiling
I see your point. The ugliness of log-parsing beckons.
Maybe it would make sense to use a separate log server machine, where the logs could be written to a database without impacting production?
Hackers,
I'd like to pose a problem we are facing (historical query time
profiling) and see if any of you interested backend gurus have
an opinion on the promise or design of a built-in backend
solution (optional built-in historical query time stats), and/or
willingness to consider such a feature.
Ed L. [EMAIL PROTECTED] writes:
> ... We can do
> this by writing programs to periodically parse log files for
> queries and durations, and then centralizing that information
> into a db for analysis, similar to pqa's effort.
That strikes me as exactly what you ought to be doing.
Suppose there
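As a rough illustration of the log-parsing approach Ed describes, the sketch below pulls (duration, statement) pairs out of server log lines. It assumes the log format produced by log_min_duration_statement; the regex and the sample lines are mine, not from the thread:

```python
import re

# Matches PostgreSQL log lines produced with log_min_duration_statement,
# e.g. "LOG:  duration: 13.541 ms  statement: SELECT * FROM foo"
DURATION_RE = re.compile(r"duration: (?P<ms>[\d.]+) ms\s+statement: (?P<sql>.*)")

def parse_durations(lines):
    """Yield (duration_ms, statement) pairs from server log lines."""
    for line in lines:
        m = DURATION_RE.search(line)
        if m:
            yield float(m.group("ms")), m.group("sql").strip()

log = [
    "2005-03-23 16:11:02 LOG:  duration: 13.541 ms  statement: SELECT * FROM foo",
    "2005-03-23 16:11:05 LOG:  connection received: host=10.0.0.1",
    "2005-03-23 16:11:09 LOG:  duration: 250.007 ms  statement: UPDATE bar SET x = 1",
]
for ms, sql in parse_durations(log):
    print(ms, sql)
```

Each extracted pair could then be shipped to a central analysis database, much as pqa does with its reports.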
Ed L. wrote:
> Hackers,
> (some snippage...)
> Our Problem: We work with 75+ geographically distributed pg
> clusters; it is a significant challenge keeping tabs on
> performance. We see degradations from rogue applications,
> vacuums, dumps, bloating indices, I/O and memory shortages, and
> so on.
On Wednesday March 23 2005 4:11, Mark Kirkwood wrote:
> Is enabling the various postgresql.conf stats* options and
> taking regular snapshots of pg_stat_activity a possible way to
> get this?
I don't see how; the duration is the key measurement I'm after,
and I don't believe it is available in pg_stat_activity.
On Wednesday March 23 2005 3:34, Tom Lane wrote:
This is going to fall down on exactly the same objections that
have been made to putting the log messages themselves into
tables. The worst one is that a failed transaction would fail
to make any entry whatsoever. There are also performance issues.
On Wednesday March 23 2005 5:14, Mark Kirkwood wrote:
> - decide on a snapshot interval (e.g. 30 seconds)
> - capture pg_stat_activity every interval and save the results
>   in a timestamped copy of this view (e.g. add a column
>   'snap_time')
That might serve for some purposes, but log-parsing sounds like
the only way to get the actual durations.
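For what it's worth, Mark's snapshot scheme can be mocked up without a live cluster. Below, pg_stat_activity rows are stand-in dicts and snapshot() adds the proposed snap_time column; all names and fields are illustrative only:

```python
from datetime import datetime, timedelta

def snapshot(rows, snap_time):
    """Tag each pg_stat_activity-style row with its capture time,
    as Mark's proposed 'snap_time' column would."""
    return [dict(row, snap_time=snap_time) for row in rows]

def long_runners(a, b):
    """Rows present in both snapshots with an unchanged query_start:
    queries that have run for at least the full interval."""
    starts = {(r["pid"], r["query_start"]) for r in a}
    return [r for r in b if (r["pid"], r["query_start"]) in starts]

# Two captures 30 seconds apart.
t0 = datetime(2005, 3, 23, 17, 0, 0)
t1 = t0 + timedelta(seconds=30)

snap_a = snapshot([{"pid": 101, "query_start": t0, "query": "SELECT ..."}], t0)
snap_b = snapshot([{"pid": 101, "query_start": t0, "query": "SELECT ..."}], t1)

print(long_runners(snap_a, snap_b))
```

Note the limitation this makes concrete: comparing snapshots only reveals queries that span at least one full interval, so per-statement durations shorter than the interval never show up, which is the measurement the log-parsing approach does capture.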