On Thu, 4 Sep 2003, David F. Skoll wrote:

> > Zapping clients that are in the middle of database operations is bad
> > design IMHO.
>
> It's required. The clients are e-mail filters and they must reply
> quickly, before the end of the SMTP transaction. If they take too long,
> they must be killed so the SMTP transaction can be tempfailed. If they
> are not killed, the SMTP sessions pile up and eventually kill the
> machine.
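One way to avoid killing the client mid-transaction is to let the server abort the query itself. A minimal sketch, assuming a PostgreSQL version that supports statement_timeout (7.3 and later) and a hypothetical virus_stats table:

```sql
-- Abort any statement in this session that runs longer than 5 seconds
-- (the value is in milliseconds). The filter gets an error it can turn
-- into an SMTP tempfail, instead of being killed while holding locks.
SET statement_timeout = 5000;

-- The contended update (table and column names are made up for
-- illustration). If the row lock can't be obtained and the update
-- completed within 5s, the statement is cancelled and the transaction
-- can be rolled back cleanly.
UPDATE virus_stats SET hits = hits + 1 WHERE virus_name = 'Sobig.F';
```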
It might be worth racking your brains to think of other ways. Query
timeouts?

> > That's correct, a backend will generally not notice client disconnect
> > until it next waits for a client command. It's not totally clear why
> > you've got so many processes waiting to update the same row, though.
>
> It's on a high-volume mail server that receives around 500K
> messages/day. About 180,000 of those are viruses, so we often have
> multiple processes trying to update the virus statistics row.
>
> > Which process does have the row lock, and why isn't it completing its
> > transaction?
>
> I don't know the details of PostgreSQL's implementation, but it seems
> that when lots of processes are waiting to update the same row, it
> gets incredibly slow.

Having everything contend for the same row seems a bad idea generally.
Instead, why not insert a new record for each event, and have a daily
cron job roll those records up into the statistics table? It will be
more efficient overall. It could even be run hourly.

--
Sam Barnett-Cormack
Software Developer                           |  Student of Physics & Maths
UK Mirror Service (http://www.mirror.ac.uk)  |  Lancaster University

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend