I wrote:

> I don't see how SSI can be modified to generate some other form of
> serialization failure here, but I'm always open to suggestions.

Actually, after thinking about it a bit more, the UPDATE statements *do* read the rows before writing, so a naive implementation would see write skew in Josh's example and generate a rollback before things got far enough to cause a deadlock. In fact, a few months ago the implementation probably would have done so, before we implemented the optimization mentioned in section 3.7.3 of Cahill's doctoral thesis[1]. The reasons for implementing that change were:

(1) It avoids getting an SIREAD lock on a row if that row has been updated by the transaction. I believe that in the PostgreSQL implementation we even avoid taking the SIREAD lock when we're in a scan from an UPDATE or DELETE statement, but I'd have to dig into the code to confirm.

(2) Because of (1), and because SIREAD locks are removed from rows which are later updated, the shared memory structures used for tracking SIREAD locks can be somewhat smaller and access to them will be a bit faster.

(3) I *think* that having the additional SIREAD locks would tend to increase the false positive rate, although I'd need to spend some time working through that to be sure.

So, the question would be: does this "optimization" from the paper actually improve performance because of the above points more than the savings which would accrue from catching the conflict in Josh's example before it gets to the point of deadlock? I can add that to the list of things to check once we have a good set of benchmarks.

-Kevin

[1] http://hdl.handle.net/2123/5353
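
P.S.  For anyone who wants to see the write skew pattern in isolation,
here is a minimal sketch.  The table and values are made up for
illustration, and this is not Josh's actual test case; the explicit
subquery here stands in for the reads which, in his example, happen
during the UPDATE's own scan (which is exactly where the section 3.7.3
optimization skips the SIREAD locks):

    CREATE TABLE accts (id int PRIMARY KEY, balance int);
    INSERT INTO accts VALUES (1, 100), (2, 100);

    -- Session A
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- Reads both rows through the subquery, writes only row 1.
    UPDATE accts SET balance = balance - 150
      WHERE id = 1
        AND (SELECT sum(balance) FROM accts) >= 150;

    -- Session B, before session A commits
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- Reads both rows through the subquery, writes only row 2.
    UPDATE accts SET balance = balance - 150
      WHERE id = 2
        AND (SELECT sum(balance) FROM accts) >= 150;

    -- Each transaction has read a row the other wrote, so when the
    -- two sessions COMMIT, SSI should roll one of them back with a
    -- serialization failure rather than letting both succeed and
    -- leave the combined balance overdrawn.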