From: "Pavel Stehule" <pavel.steh...@gmail.com>
2014-08-13 13:59 GMT+02:00 MauMau <maumau...@gmail.com>:
Are you concerned about the impact of collection overhead on the queries
being diagnosed? It may not be light, but I'm optimistic: Oracle has a long
track record with this, and MySQL has provided the Performance Schema since
5.6.
Partially; I'm afraid about overall performance (mainly the impact on I/O).
When we use ordinary tables, any analysis without indexes is slow, so you
need indexes, and we cannot defer index updates. You also have to think
about a retention policy - and without partitioning you get massive deletes.
So I cannot imagine a table-based solution working under any higher load.
Our MVCC storage is not practical for data that is only inserted, and a
custom storage format would have no indexes - so this design is a relatively
big project.
I would prefer the possibility of reading the log via SQL (maybe through
some FDW) rather than using tables to store the log. Such tables can grow
very large within a few days - and we cannot simply write a specialized
engine the way MySQL did.
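For what it's worth, a minimal sketch of that direction already works today
with file_fdw over the CSV server log. This assumes log_destination includes
'csvlog'; the filename below is only a placeholder for whatever log_directory
and log_filename actually produce:

CREATE EXTENSION file_fdw;
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;

-- the column list must match the csvlog format
CREATE FOREIGN TABLE pglog (
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text
) SERVER pglog
  OPTIONS (filename 'log/pglog.csv', format 'csv');

-- e.g. count log entries per severity
SELECT error_severity, count(*) FROM pglog GROUP BY 1 ORDER BY 2 DESC;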
I didn't mean for the performance statistics data to be stored in database tables.
I just meant:
* pg_stat_system_events is a view over in-memory data, which returns one
row for each event across the whole system. This is similar to V$SYSTEM_EVENT
in Oracle (a rough usage sketch follows after this list).
* pg_stat_session_events is a view over in-memory data, which returns one
row for each event in one session. This is similar to V$SESSION_EVENT in
Oracle.
* The above views represent the current accumulated data like other
pg_stat_xxx views.
* EXPLAIN ANALYZE and auto_explain show all events for one query. The lock
waits you are trying to record in the server log are one kind of those events.
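To illustrate the intended usage, a hypothetical query against such views
might look like the following. The column names (event_name, total_waits,
total_wait_time, pid) are placeholders loosely modeled on Oracle's
V$SYSTEM_EVENT and V$SESSION_EVENT, not a settled design:

-- system-wide: which events the server waits on most
SELECT event_name, total_waits, total_wait_time
  FROM pg_stat_system_events
 ORDER BY total_wait_time DESC
 LIMIT 10;

-- per-session: the same breakdown for the current backend
SELECT event_name, total_waits, total_wait_time
  FROM pg_stat_session_events
 WHERE pid = pg_backend_pid();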
Regards
MauMau