At work we have a 24-core server with a load average around 2.5. I don't know yet whether a system that uses some spare CPU to minimize the impact of a bad query identified early would be acceptable or make things worse.
Indeed, I don't know if my boss would let me test this in production either, but it would be good to know how things work in "auto-pilot" mode.

2011/12/10 Tomas Vondra <t...@fuzzy.cz>

> There's the auto_explain contrib module that does exactly what you're asking
> for. Anyway, explain analyze is quite expensive - think twice before
> enabling it on a production server where you already have performance
> issues.
>
> Tomas
>
> On 10.12.2011 17:52, Daniel Cristian Cruz wrote:
> > Hi all,
> >
> > I'm trying to identify some common slow queries running on the server
> > by analyzing the slow-query log.
> >
> > I found debug_print_parse, debug_print_rewritten, and debug_print_plan,
> > but they are too verbose and log all queries.
> >
> > I was thinking of something like a simple explain analyze, recorded only
> > for queries logged by log_min_duration_statement, along with the query text.
> >
> > Is there a way to configure PostgreSQL to get this kind of information,
> > or am I missing something? Would it be too hard to hack into the sources
> > and do it by hand? I have never touched the PostgreSQL sources.
> >
> > I'm planning to write a paper that needs this information for my
> > postgraduate course. The focus of my work will be the log data, not
> > PostgreSQL itself. If I succeed, maybe it can become a tool to help all of us.
> >
> > Thank you,
> > --
> > Daniel Cristian Cruz
> > クルズ クリスチアン ダニエル
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance

--
Daniel Cristian Cruz
クルズ クリスチアン ダニエル
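For anyone following along: a minimal auto_explain setup, based on the contrib module's documented parameters, might look like this in postgresql.conf (the 1s threshold is just an illustrative value - tune it to your workload):

```
# postgresql.conf - load the auto_explain contrib module at server start
shared_preload_libraries = 'auto_explain'

# log execution plans only for statements running longer than 1 second
auto_explain.log_min_duration = '1s'

# include EXPLAIN ANALYZE timing detail (adds overhead, as Tomas warns above)
auto_explain.log_analyze = on
```

For testing without a server restart, the module can also be loaded in a single session with LOAD 'auto_explain' and the parameters set via SET, which avoids touching the production-wide config.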