Jeff Davis <pg...@j-davis.com> wrote:
> On Mon, 2009-06-01 at 22:12 +0100, Greg Stark wrote:
>> No, I'm not. I'm questioning whether a serializable transaction
>> isolation level that makes no guarantee that it won't fire
>> spuriously is useful.
>
> I am also concerned (depending on implementation, of course) that
> certain situations can make it almost certain that you will get
> serialization failures every time. For instance, a change in the
> heap order, or data distribution, could mean that your application
> is unable to make progress at all.
>
> Is this a valid concern, or are there ways of avoiding this
> situation?

I've been concerned about that possibility, too. In the traditional
blocking implementations it is OK to attempt the retry almost
immediately, since a conflicting transaction should then block you
until one of the original transactions in the conflict completes. It
appears to me that with the proposed technique you could jump back in
and hit exactly the same combination of read-write dependencies,
leading to repeated rollbacks. I'm not happy with the thought of
trying to handle that with simple delays (or even escalating delays)
before retry.

I'm not sure how big a problem this is likely to be in practice, so
I've been trying to avoid the trap of premature optimization on this
point. But a valid concern? Certainly.

> I would think that we'd need some way to detect that this is
> happening, give it a few tries, and then resort to full
> serialization for a few transactions so that the application can
> make progress.

I'd hate to go to actual serial execution of all serializable
transactions. Perhaps we could fall back to traditional blocking
techniques based on some heuristic? That would create blocking, and
would lead to occasional deadlocks; however, it might be the optimal
fix, if this is found to actually be a problem.

-Kevin
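For concreteness, a minimal client-side sketch of the "escalating delays before retry" idea discussed above. This is purely illustrative: `SerializationFailure`, `run_with_retry`, and all parameters here are hypothetical names for a generic retry helper, not PostgreSQL or driver APIs (a real client would catch the driver's error for SQLSTATE 40001).

```python
import random
import time


class SerializationFailure(Exception):
    """Hypothetical stand-in for a serialization failure (SQLSTATE 40001)."""


def run_with_retry(txn, max_retries=5, base_delay=0.01):
    """Run txn(), retrying on SerializationFailure with escalating delays.

    Exponential backoff plus jitter; the jitter is meant to reduce the
    chance that two retrying transactions re-collide in lockstep, which
    is exactly the repeated-rollback pattern worried about above.
    """
    for attempt in range(max_retries):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_retries - 1:
                raise  # give up after max_retries attempts
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

Note that no amount of backoff guarantees progress, which is why a heuristic fallback to blocking (or serial execution) comes up at all; a delay only makes a repeat of the same dependency cycle less likely, not impossible.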
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers