Scott Marlowe wrote:
Sure, but you have to trap that all the time.  The solution using a
cycling sequence keeps you from ever seeing that (unless you managed
to check out all 9,999 other values while still holding the current
one).  No locking needed, dozens of updaters running concurrently, and
no need to track update errors.
Yep, that does sound like it'd be nicer, at least if locks are being freed at a reasonable rate (i.e. you don't have to step through most of the table to find a free one). I was working on the probably mistaken assumption that the OP wanted the "next" / "first" available slot, not just any free slot.
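
If I've understood the cycling-sequence scheme correctly, it'd look something like this (table, column and sequence names invented for illustration):

```sql
-- Hypothetical schema: a fixed pool of 10,000 lock slots, plus a
-- sequence that cycles through their ids forever.
CREATE TABLE lock_slots (
    id    integer PRIMARY KEY,  -- 0 .. 9999, pre-populated
    owner text                  -- NULL means the slot is free
);
CREATE SEQUENCE lock_slot_seq MINVALUE 0 MAXVALUE 9999 CYCLE;

-- Claim attempt: no explicit locking. The statement either claims the
-- candidate slot or affects zero rows, in which case the client just
-- runs it again to try the next sequence value.
UPDATE lock_slots
   SET owner = 'me'
 WHERE id = nextval('lock_slot_seq')
   AND owner IS NULL;
```

Since each concurrent updater gets a different value from nextval(), they're normally probing different rows and don't contend with each other at all.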

If there are very few free locks at any given time, I have the feeling the sequence approach could spend a lot of time just scanning through the table looking for free entries. Then again, an aggregate subquery is far from free either, and it's a whole lot nicer to simply repeat one statement until it succeeds than to retry the whole transaction whenever it conflicts with another (which will happen often if demand for locks is really high).

In fact, having every transaction try to grab the lowest free lock is practically a recipe for serialization failures, making it even less attractive. With only two concurrent connections it'd work OK if one used min() and the other used max() ... but add another couple and you're in trouble.
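
For comparison, the "lowest free slot" approach I had in mind was along these lines (again with invented names, reusing the same hypothetical lock_slots table):

```sql
-- Every concurrent transaction computes the same min(id), so at
-- SERIALIZABLE they all fight over one row and most of them fail
-- with a serialization error and must retry the whole transaction.
UPDATE lock_slots
   SET owner = 'me'
 WHERE id = (SELECT min(id) FROM lock_slots WHERE owner IS NULL)
   AND owner IS NULL;
```

The min()/max() trick only splits the contention two ways, which is why it stops helping as soon as you have more than two updaters.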

The sequence-based approach sounds a fair bit better.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general