For unknown reasons, our PG server died overnight. (PG 7.4.2, RedHat 8,
ext3 in ordered data mode, and battery-backed RAID5.) When it came back up,
I saw an unfamiliar line in the recovery output from Postgres:
Apr 20 11:28:14 postgres: [6203] LOG: database system was interrupted at 2004-04-20
A checkpoint would also have reason to wait for a page-level lock, if
the stuck backend were holding one. I am wondering, though, whether the
stuck condition consistently happens while trying to fire a trigger.
That would be very interesting ... not sure what it'd mean, though ...
Hmm. I'm really at a loss here.
It looks to me like the guy doing VACUUM is simply waiting for the other
guy to release a page-level lock. The other guy is running a deferred
trigger and so I'd expect him to be holding one or two page-level locks,
on the page or pages containing the tuple or tuples passed to the
trigger.
I'm back with more on the funky glibc-syslog-Postgres deadlocking behavior:
It's really too bad that your gdb backtrace didn't show anything past
the write_syslog level (which is an elog subroutine). If we could see
whether the elog had been issued from a signal handler, and if so what
it interrupted, we'd have a much better idea of where the deadlock lies.
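
For what it's worth, the failure mode being hypothesized is easy to state:
POSIX does not list syslog() as async-signal-safe, and glibc's
implementation serializes callers on an internal lock, so an elog that
fires from a signal handler while the interrupted code is already inside
syslog() will block forever on a lock its own process holds. Here is a
minimal standalone sketch of that pattern (my own demo code, not taken
from the backend; whether the hang reproduces depends on the glibc build,
and linking with -pthread makes the internal locking real):

    /* cc -pthread -o syslog-deadlock-demo syslog-deadlock-demo.c */
    #include <signal.h>
    #include <syslog.h>
    #include <unistd.h>

    /* This handler does what an elog-via-syslog from signal context
     * amounts to.  If SIGALRM lands while main() is inside syslog(),
     * the internal syslog lock is already held by this same process,
     * and the call below never returns. */
    static void on_alarm(int signo)
    {
        alarm(1);               /* re-arm first, in case syslog() blocks */
        syslog(LOG_WARNING, "signal %d arrived", signo);
    }

    int main(void)
    {
        struct sigaction sa;

        sa.sa_handler = on_alarm;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);

        openlog("syslog-deadlock-demo", LOG_PID, LOG_USER);
        alarm(1);
        for (;;)
            syslog(LOG_INFO, "main loop message");  /* mostly inside syslog() */
    }

Attach gdb once the process stops logging, and the backtrace should end
the same way the stuck backends' do: inside the syslog machinery, with
the signal handler's frame sitting on top of an interrupted syslog call.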
I'm encountering strange hangs in PostgreSQL backends at random moments.
They seem to be associated with attempts to issue log entries via syslog.
I have run backtraces on the hung backends a few times, and they routinely
trace into system libraries where it looks like a stuck syslog call. So
far,
Arthur Ward [EMAIL PROTECTED] writes:
> I'm encountering strange hangs in PostgreSQL backends at random
> moments. They seem to be associated with attempts to issue log entries
> via syslog. I have run backtraces on the hung backends a few times,
> and they routinely trace into system libraries where
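
If that theory holds up, the standard defensive pattern applies: a signal
handler should do nothing but set a flag (write() to stderr is also on the
async-signal-safe list if you need an emergency note), and the actual
syslog() call happens back at the main loop. A minimal sketch of the
flag-and-poll idiom, again my own demo rather than backend code:

    #include <signal.h>
    #include <syslog.h>
    #include <unistd.h>

    static volatile sig_atomic_t pending_signal = 0;

    /* Async-signal-safe handler: record the fact and nothing else. */
    static void on_signal(int signo)
    {
        pending_signal = signo;
    }

    int main(void)
    {
        struct sigaction sa;

        sa.sa_handler = on_signal;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGTERM, &sa, NULL);

        openlog("safe-logging-demo", LOG_PID, LOG_USER);
        for (;;)
        {
            if (pending_signal)
            {
                /* syslog() runs only at main-loop level, so it can
                 * never interrupt itself. */
                syslog(LOG_INFO, "handled signal %d", (int) pending_signal);
                pending_signal = 0;
            }
            sleep(1);           /* crude poll; fine for a demo */
        }
    }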