On 1/16/15 11:35 AM, Pavel Stehule wrote:


2015-01-16 18:23 GMT+01:00 Jim Nasby <jim.na...@bluetreble.com>:

    On 1/16/15 11:00 AM, Pavel Stehule wrote:

        Hi all,

        some time ago I proposed measuring lock time per query. The main open
        issue was how to present this information. Today's proposal is a
        little simpler, but still useful: we can show the total lock time per
        database in the pg_stat_database statistics. A high number can signal
        lock issues.
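
(As a rough illustration of what such accounting could look like, here is a
minimal C sketch: time each blocking lock acquisition and add the elapsed
time to a per-database total. Every identifier is hypothetical; none of this
is taken from the actual PostgreSQL source.)

/*
 * Minimal sketch of per-database lock wait accounting.
 * All names are hypothetical, not from the PostgreSQL source.
 */
#include <stdint.h>
#include <time.h>

typedef struct DbLockStats
{
    uint64_t    total_lock_wait_us; /* would surface as a pg_stat_database column */
} DbLockStats;

static DbLockStats current_db_stats;

static uint64_t
now_us(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t) ts.tv_sec * 1000000 + (uint64_t) (ts.tv_nsec / 1000);
}

/* Wrap a blocking acquisition and charge the wait to the current database. */
static void
acquire_with_accounting(void (*blocking_acquire) (void))
{
    uint64_t    start = now_us();

    blocking_acquire();         /* blocks until the lock is granted */
    current_db_stats.total_lock_wait_us += now_us() - start;
}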


    Would this not use the existing stats mechanisms? If so, couldn't we do
    this per table? (I realize that won't handle all cases; we'd still need a
    "lock_time_other" somewhere).



It can use the existing stats mechanisms.

I'm afraid it isn't possible to assign waiting time to a table, because it
depends on order.

Huh? Order of what?

    Also, what do you mean by 'lock'? Heavyweight? We already have some
    visibility there. What I wish we had was some way to know if we're
    spending a lot of time in a particular non-heavy lock. Actually measuring
    time probably wouldn't make sense, but we might be able to count how often
    we fail the initial acquisition, or something.


Now that I think about it, lock_time is not a good name - maybe "lock waiting
time" (the lock time itself is not interesting; the waiting is). It could also
be divided into more categories - at GoodData we use heavyweight, page, and
other categories.
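
(A sketch of the categories idea; the category names follow the ones
mentioned above, everything else is invented for illustration.)

/* Sketch: splitting lock waiting time into categories, as described above.
 * All identifiers are hypothetical. */
#include <stdint.h>

typedef enum LockWaitCategory
{
    WAIT_HEAVYWEIGHT,           /* regular (heavyweight) locks */
    WAIT_PAGE,                  /* page-level locks */
    WAIT_OTHER,                 /* everything else */
    WAIT_NUM_CATEGORIES
} LockWaitCategory;

static uint64_t wait_time_us[WAIT_NUM_CATEGORIES];

static void
record_lock_wait(LockWaitCategory cat, uint64_t elapsed_us)
{
    wait_time_us[cat] += elapsed_us;
}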

So do you see this somehow encompassing locks other than heavyweight locks? 
Because I think that's the biggest need here. Basically, something akin to 
TRACE_POSTGRESQL_LWLOCK_WAIT_START() that doesn't depend on dtrace.
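
(One possible shape for that, tying in the earlier idea of counting failed
initial acquisitions rather than timing them: an always-compiled counter
bumped where the dtrace probe would fire. A sketch only - a pthread mutex
stands in for an LWLock, and every name apart from the probe macro is
invented.)

/* Sketch: a dtrace-free stand-in for TRACE_POSTGRESQL_LWLOCK_WAIT_START(),
 * counting contention events instead of firing a probe. A pthread mutex
 * stands in for an LWLock; all identifiers here are hypothetical. */
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t stand_in_lwlock = PTHREAD_MUTEX_INITIALIZER;
static atomic_ulong lwlock_wait_starts; /* failed fast-path acquisitions */

static void
acquire_counting_waits(void)
{
    if (pthread_mutex_trylock(&stand_in_lwlock) != 0)
    {
        /* Fast path failed: this is where the probe would fire.
         * Count it, then fall back to a normal blocking acquisition. */
        atomic_fetch_add(&lwlock_wait_starts, 1);
        pthread_mutex_lock(&stand_in_lwlock);
    }
}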
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

