On Mon, Jan 13, 2014 at 12:58 PM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:
> Well, even if you don't agree that locking all the conflicting rows for
> update is sensible, it's still perfectly sensible to return the rejected
> rows to the user. For example, you're inserting N rows, and if some of them
> violate a constraint, you still want to insert the non-conflicting rows
> instead of rolling back the whole transaction.
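(For concreteness, the behavior Heikki describes -- insert the
non-conflicting rows and hand the rejects back to the user -- can be
emulated today with per-row savepoints. The sketch below uses psycopg2
against an invented upsert_target table and connection string; it is
only an illustration of the desired behavior, not the proposed syntax.)

```python
# Illustrative only: insert N rows, keep the non-conflicting ones, and
# report the rejected rows instead of rolling back the whole transaction.
# Emulated with per-row savepoints; table/DSN names are made up.
import psycopg2

rows = [(1, 'alice'), (2, 'bob'), (2, 'bob-dup')]   # third row conflicts on key

conn = psycopg2.connect("dbname=test")              # assumed DSN
rejected = []
with conn, conn.cursor() as cur:
    for key, val in rows:
        cur.execute("SAVEPOINT per_row")
        try:
            cur.execute(
                "INSERT INTO upsert_target (key, val) VALUES (%s, %s)",
                (key, val))
        except psycopg2.IntegrityError:
            # Unique violation: undo just this row and remember it.
            cur.execute("ROLLBACK TO SAVEPOINT per_row")
            rejected.append((key, val))
        else:
            cur.execute("RELEASE SAVEPOINT per_row")
conn.close()

print("rejected rows:", rejected)
```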
Right, but with your approach, can you really be sure that you have the
ctid of the right conflicting tuple (and not just a reject)? In other
words, while you wait for the exclusion constraint to conclusively
indicate that there is a conflict, minutes may have passed, during which
other conflicts may emerge in earlier unique indexes. With an approach
where values are locked, by contrast, you are guaranteed that earlier
unique indexes have no conflicting values. Maintaining that property
seems useful, since we check in a well-defined order, and we're still
projecting a ctid.

Unlike when row locking is involved, we can make no assumptions or
generalizations about where conflicts will occur. That may also be a
general concern with your approach when row locking is involved, for
multi-master replication use cases. There may be some value in knowing
the conflict cannot have come from an earlier unique index (and so the
existing values for those unique indexes in the locked row should stay
the same -- don't many conflict resolution policies work that way?).

--
Peter Geoghegan
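(To make the ordering concern concrete, here is a toy single-process
sketch -- invented names throughout, not PostgreSQL internals. If no
value lock is retained on the earlier unique index while we wait on the
later constraint, a rival insertion can create an earlier-index conflict
in the meantime, and the ctid we eventually report no longer reflects
the first conflicting index.)

```python
# Toy model of the ordering concern; all names are invented.
index_a = {}   # unique index checked first: key -> ctid
index_b = {}   # later constraint (e.g. exclusion): key -> ctid

def check_without_value_locks(a_key, b_key, concurrent_insert):
    # Check index_a first; no value lock is held while we go on to wait
    # on index_b, so a rival session can slip a conflicting entry into
    # index_a in the meantime.
    conflict = index_a.get(a_key)        # no conflict yet
    concurrent_insert()                  # simulates the long wait on index_b
    if conflict is None:
        conflict = index_b.get(b_key)    # conflict reported from the later index
    return conflict

def rival_session():
    index_a["k1"] = "ctid_A"             # earlier-index conflict appears late

index_b["k2"] = "ctid_B"
print(check_without_value_locks("k1", "k2", rival_session))
# Prints 'ctid_B': the reported conflict comes from the later constraint,
# even though, by the time it is reported, index_a also conflicts -- the
# well-defined checking order no longer tells us anything.
```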