On Mon, Jun 1, 2009 at 9:24 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
>> I'm concerned with whether you can be sure that the 999th time you
>> run it the database won't randomly decide to declare a serialization
>> failure for reasons you couldn't predict were possible.
>
> Now you're questioning whether SERIALIZABLE transaction isolation
> level is useful. Probably not for everyone, but definitely for some.
No, I'm not. I'm questioning whether a serializable transaction isolation level that makes no guarantee that it won't fire spuriously is useful.

Postgres doesn't take block-level locks or table-level locks to do row-level operations, so you can write code and know that it's safe from deadlocks. Heikki proposed a list of requirements which included a requirement that you not get spurious serialization failures, and you rejected that on the basis that that's not how MSSQL and Sybase implement it.

I'm unhappy with the idea that if I access too many rows, or my query conditions aren't written just so, then the database will forget which rows I'm actually concerned with, "lock" other random unrelated records, and possibly roll my transaction back even though I had no way of knowing my code was at risk.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
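[Editor's note: the standard client-side remedy for the serialization failures discussed above is to retry the whole transaction. This is a minimal Python sketch of that pattern, not code from the thread; `SerializationFailure` and the `transfer` transaction body are hypothetical stand-ins (the real PostgreSQL error is SQLSTATE 40001, `serialization_failure`).]

```python
class SerializationFailure(Exception):
    """Stand-in for PostgreSQL's SQLSTATE 40001 (serialization_failure)."""

def transfer(attempt):
    # Hypothetical transaction body: fails "spuriously" on the first two
    # attempts to simulate a serialization failure the client could not
    # have predicted from its own reads and writes.
    if attempt < 2:
        raise SerializationFailure("could not serialize access")
    return "committed"

def run_with_retry(txn, max_retries=5):
    # Retry loop: on a serialization failure the transaction is rolled
    # back and re-run from the beginning, since the abort carries no
    # information about which rows to avoid next time.
    for attempt in range(max_retries):
        try:
            return txn(attempt)
        except SerializationFailure:
            continue  # roll back and retry the whole transaction
    raise RuntimeError("gave up after %d attempts" % max_retries)

print(run_with_retry(transfer))  # -> committed
```

The key design point is that the retry must wrap the entire transaction, not a single statement: after SQLSTATE 40001 the server has already rolled everything back.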