On Thu, Aug 6, 2015 at 1:01 PM, Ildus Kurbangaliev <
i.kurbangal...@postgrespro.ru> wrote:

> On 08/05/2015 09:33 PM, Robert Haas wrote:
>
>> On Wed, Aug 5, 2015 at 1:10 PM, Ildus Kurbangaliev
>> <i.kurbangal...@postgrespro.ru> wrote:
>>
>>> About `memcpy`: the PgBackendStatus struct already has a bunch of
>>> multi-byte variables, so it won't be consistent anyway if somebody
>>> copies it that way. On the other hand, two bytes give less overhead
>>> in this case because we can avoid the offset calculations. And, as
>>> I've mentioned before, the wait class will be useful when wait
>>> monitoring is extended.
>>>
>> You're missing the point.  Those multi-byte fields have additional
>> synchronization requirements, as I explained in some detail in my
>> previous email. You can't just wave that away.
>>
> I see that now. Thank you for pointing that out.
>
> I've looked deeper, and I found that PgBackendStatus is not a suitable
> place for keeping information about low-level waits. PgBackendStatus is
> really meant to track high-level information about a backend. That is
> why auxiliary processes don't have a PgBackendStatus: they don't have
> such information to expose. But when it comes to low-level wait events,
> auxiliary processes are just as interesting to monitor as backends. The
> WAL writer, checkpointer, bgwriter, etc. use LWLocks as well, so it is
> unclear why they shouldn't be monitorable.
>
> This is why I think we shouldn't place the wait event into
> PgBackendStatus. It could be placed into PGPROC, or even into a separate
> data structure with a different concurrency model better suited for
> monitoring.
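
To spell out the synchronization point from the quoted discussion: the
multi-byte fields in PgBackendStatus are protected by a changecount,
seqlock-style protocol, while a single small field can simply be stored
and loaded. The sketch below is illustrative only -- the struct, field and
function names are made up for the example and are not the actual pgstat
code.

/*
 * Illustrative sketch: the shape of a changecount (seqlock-style)
 * protocol for multi-byte fields, next to a one-byte field that
 * needs no such protocol.
 */
typedef struct SampleStatus
{
    int         changecount;    /* odd while an update is in progress */
    char        activity[64];   /* multi-byte: needs the protocol */
    uint8       wait_event;     /* one byte: plain store/load is enough */
} SampleStatus;

static void
sample_write_activity(volatile SampleStatus *st, const char *act)
{
    st->changecount++;          /* now odd: update in progress */
    pg_write_barrier();
    strlcpy((char *) st->activity, act, sizeof(st->activity));
    pg_write_barrier();
    st->changecount++;          /* even again: update complete */
}

static void
sample_read_activity(volatile SampleStatus *st, char *buf)
{
    int         before;
    int         after;

    do
    {
        before = st->changecount;
        pg_read_barrier();
        strlcpy(buf, (const char *) st->activity, sizeof(st->activity));
        pg_read_barrier();
        after = st->changecount;
    } while (before != after || (before & 1) != 0);
}

That extra machinery is exactly what a small wait event field in PGPROC
can avoid.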


+1 for tracking wait events not only for backends

Ildus, could you do the following?
1) Extract LWLocks refactoring into separate patch.
2) Make a patch with storing current wait event information in PGPROC.
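
To make (2) a bit more concrete, roughly the direction I have in mind is
below. This is only a sketch: the field name, the helper functions and
where the store happens are assumptions for illustration, not a design
decision.

/* added to struct PGPROC in storage/proc.h */
uint8       wait_event;     /* what this process currently waits on, 0 = nothing */

/* called by the waiting process, e.g. from LWLockAcquire() */
static inline void
report_wait_event(uint8 event)
{
    /* single-byte store: no torn writes, no changecount needed */
    MyProc->wait_event = event;
}

/* called by a monitoring function scanning ProcGlobal->allProcs */
static inline uint8
read_wait_event(volatile PGPROC *proc)
{
    /* may lag by an instant, but can never be seen half-updated */
    return proc->wait_event;
}

Since PGPROC exists for auxiliary processes too, the WAL writer,
checkpointer, bgwriter and friends would be covered without any extra
work.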

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
