On Dec 20, 2010, at 13:13 , Heikki Linnakangas wrote:
> One way to look at this is that the problem arises because SELECT FOR UPDATE
> doesn't create a new tuple like UPDATE does. The problematic case was:
>
>> T1 locks, T1 commits, T2 updates, T2 aborts, all after T0
>> took its snapshot but before T0 attempts to delete. :-(
>
> If T1 does a regular UPDATE, T2 doesn't overwrite the xmax on the original
> tuple, but on the tuple that T1 created.
> So one way to handle FOR UPDATE would be to lazily turn the lock operation by
> T1 into a dummy update, when T2 updates the tuple. You can't retroactively
> make a regular update on behalf of the locking transaction that committed
> already, or concurrent selects would see the same row twice, but it might
> work with some kind of a magic tuple that's only followed through the ctid
> from the original one, and only for the purpose of visibility checks.

In the case of an UPDATE of a recently locked tuple, we could avoid having to
insert a dummy tuple by storing the old tuple's xmax in the new tuple's xmax.
We'd flag the old tuple, and during scanning and vacuuming attempt to restore
the xmax of any flagged tuple whose xmax is aborted and whose ctid != t_self.

For DELETEs, that won't work, since there is no new tuple to store the old
xmax in. However, could we perhaps abuse the ctid to store the old xmax? It
currently contains t_self, but do we actually depend on that?

FOR-SHARE and FOR-UPDATE locks could preserve information about the latest
committed locker by creating a multi-xid. For FOR-SHARE locks, we'd just need
to make sure we keep at least one of the finished transactions in the
multi-xid. For FOR-UPDATE locks, we'd need to create a multi-xid if the old
xmax is >= GlobalXmin, but I guess that's tolerable.

best regards,
Florian Pflug