On 11.05.2012 16:52, Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
>> I wonder if we should reserve a few of the lwlock "slots" for critical
>> sections, to make this less likely to happen. Not only in this case, but
>> in general. We haven't seen this problem often, but it would be quite
>> trivial to reserve a few slots.

> I'm against that: it would complicate a performance-critical and
> correctness-critical part of the code, in return for what exactly?
> IMO, no part of the system should ever get within an order of magnitude
> of holding 100 LWLocks concurrently.

I agree we should never get anywhere near that limit. But if we do, because of another bug like this one, it would be nice if the result were just an ERROR instead of a PANIC.

> For one thing, I don't believe
> it's possible to statically guarantee no deadlock once things get that
> messy; and for another, it'd surely be horrible from a concurrency
> standpoint.

Well, take for example a GiST page split that splits a page into a hundred pages: all but one of the pages involved are previously unused, so it's quite easy to guarantee that the operation is deadlock-free. It's nevertheless not a good idea in practice to do that, of course.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
