Bruce Momjian br...@momjian.us wrote:
Added to TODO:
Consider improving serialized transaction behavior to avoid anomalies
* http://archives.postgresql.org/pgsql-hackers/2009-05/msg01136.php
* http://archives.postgresql.org/pgsql-hackers/2009-06/msg00035.php
It might be worth
On Thu, Jun 4, 2009 at 11:32 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I was going to try to scare up some resources to advance this if we
could get to some consensus. I don't get the feeling we're there yet.
Suggestions welcome.
I think I might've said this before, but I think
Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 4, 2009 at 11:32 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I was going to try to scare up some resources to advance this if we
could get to some consensus. I don't get the feeling we're there
yet. Suggestions welcome.
I
Kevin Grittner wrote:
Bruce Momjian br...@momjian.us wrote:
Added to TODO:
Consider improving serialized transaction behavior to avoid anomalies
* http://archives.postgresql.org/pgsql-hackers/2009-05/msg01136.php
* http://archives.postgresql.org/pgsql-hackers/2009-06/msg00035.php
Added to TODO:
Consider improving serialized transaction behavior to avoid anomalies
* http://archives.postgresql.org/pgsql-hackers/2009-05/msg01136.php
* http://archives.postgresql.org/pgsql-hackers/2009-06/msg00035.php
Hi,
Quoting Greg Stark st...@enterprisedb.com:
No, I'm not. I'm questioning whether a serializable transaction
isolation level that makes no guarantee that it won't fire spuriously
is useful.
It would certainly be an improvement compared to our status quo, where
truly serializable
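(To make the status quo concrete: a minimal sketch of the classic
"write skew" anomaly, which snapshot isolation permits and a truly
serializable implementation must reject. The table and values are
hypothetical.)

  CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean);
  INSERT INTO doctors VALUES ('alice', true), ('bob', true);

  -- Invariant each transaction checks: at least one doctor on call.
  -- Session 1                          -- Session 2
  BEGIN ISOLATION LEVEL SERIALIZABLE;
                                        BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT count(*) FROM doctors
    WHERE on_call;          -- sees 2
                                        SELECT count(*) FROM doctors
                                          WHERE on_call;        -- sees 2
  UPDATE doctors SET on_call = false
    WHERE name = 'alice';
                                        UPDATE doctors SET on_call = false
                                          WHERE name = 'bob';
  COMMIT;
                                        COMMIT;  -- also succeeds
  -- Net result: no doctor on call, although each transaction verified
  -- the invariant held on its snapshot; no serial order of the two
  -- could produce this outcome.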
On Tue, Jun 2, 2009 at 1:13 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Greg Stark st...@enterprisedb.com wrote:
Just as carefully written SQL code can be written to avoid deadlocks,
I would expect to be able to look at SQL code and know it's safe
from serialization failures, or at
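(The deadlock discipline Greg alludes to is easy to state locally:
take row locks in a fixed order. A minimal sketch, assuming a
hypothetical accounts table:)

  -- Every transaction that touches rows 1 and 2 updates them in
  -- ascending key order, so no lock-wait cycle can form.
  BEGIN;
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;
  UPDATE accounts SET balance = balance + 10 WHERE id = 2;
  COMMIT;
  -- If one transaction instead updated id 2 first, two concurrent
  -- runs could deadlock. The open question in this thread is whether
  -- serialization failures admit any comparably local rule.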
Markus Wanner mar...@bluegap.ch wrote:
What I'm more concerned about is the requirement of the proposed algorithm
to keep track of the set of tuples read by any transaction and keep
that set until sometime well after the transaction committed (as
questioned by Neil). That doesn't sound like a
Greg Stark st...@enterprisedb.com wrote:
On Tue, Jun 2, 2009 at 1:13 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Greg Stark st...@enterprisedb.com wrote:
Just as carefully written SQL code can be written to avoid
deadlocks,
I would expect to be able to look at SQL code and know
On Tue, Jun 2, 2009 at 2:44 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Even in your environment I could easily imagine, say, a monthly job
to
delete all records older than 3 months. That job could take hours or
even days. It would be pretty awful for it to end up needing to be
Greg Stark st...@enterprisedb.com wrote:
On Tue, Jun 2, 2009 at 2:44 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
We have next to nothing which can be deleted after three months.
That's reassuring for a courts system.
:-)
But I said "I could easily imagine". The point was
Hi,
Kevin Grittner wrote:
Greg Stark st...@enterprisedb.com wrote:
I would want any serialization failure to be
justifiable by simple inspection of the two transactions.
BTW, there are often three (or more) transactions involved in creating
a serialization failure, where any two of them
Robert Haas robertmh...@gmail.com wrote:
But at least it doesn't seem like anyone is seriously arguing that
true serializability wouldn't be a nice feature, if hypothetically
we had an agreed-upon implementation and a high-level developer with
a lot of time on their hands.
If that's true,
On Mon, Jun 1, 2009 at 6:27 PM, Markus Wanner mar...@bluegap.ch wrote:
I'm not that eager on the "justifiable by simple inspection" requirement
above. I don't think a DBA is commonly doing these inspections at all.
I think a tool to measure abort rates per transaction (type) would serve
the DBA
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Robert Haas robertmh...@gmail.com wrote:
But at least it doesn't seem like anyone is seriously arguing that
true serializability wouldn't be a nice feature, if hypothetically
we had an agreed-upon implementation and a high-level developer
Greg Stark st...@enterprisedb.com wrote:
But it's certainly insufficient in an OLAP or DSS environment where
transactions can take hours. If you can never know for sure that
you've written your transaction safely and it might randomly fail
and need to be retried any given day due to
Kevin,
I'm not sure it's without value to the project; I just don't know that
it would be worth using for us. It seems to be accepted in some other
DBMS products. Since some (like MS SQL Server) allow users to choose
snapshot isolation or blocking-based serializable transactions in
their MVCC
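(For reference, the choice Kevin describes looks roughly like this in
T-SQL; syntax quoted from memory, so treat it as a sketch:)

  ALTER DATABASE mydb SET ALLOW_SNAPSHOT_ISOLATION ON;

  SET TRANSACTION ISOLATION LEVEL SNAPSHOT;      -- MVCC snapshot reads
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- lock-based, truly
                                                 -- serializable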
On Mon, Jun 1, 2009 at 7:24 PM, Josh Berkus j...@agliodbs.com wrote:
Since some (like MS SQL Server) allow users to choose
snapshot isolation or blocking-based serializable transactions in
their MVCC implementation
This approach allowed MSSQL to clean up on TPCE; to date their performance
Greg Stark st...@enterprisedb.com wrote:
If you can never know for sure that you've written your transaction
safely
Whoa! I just noticed this phrase on a re-read. I think there might
be some misunderstanding here.
You can be sure you've written your transaction safely just as soon as
On Mon, Jun 1, 2009 at 8:55 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Whoa! I just noticed this phrase on a re-read. I think there might
be some misunderstanding here.
You can be sure you've written your transaction safely just as soon as
your COMMIT returns without error.
I
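(Kevin's point, illustrated: after a serialization failure the client
re-runs the whole transaction from BEGIN; once COMMIT returns without
error the work stands. A sketch against a hypothetical table t:)

  -- Session A                          -- Session B
  BEGIN ISOLATION LEVEL SERIALIZABLE;
                                        BEGIN ISOLATION LEVEL SERIALIZABLE;
  UPDATE t SET n = n + 1 WHERE id = 1;
                                        UPDATE t SET n = n + 1 WHERE id = 1;
                                        -- blocks waiting on session A ...
  COMMIT;
                                        -- ERROR: could not serialize access
                                        --        due to concurrent update
                                        ROLLBACK;
                                        BEGIN ISOLATION LEVEL SERIALIZABLE;
                                        UPDATE t SET n = n + 1 WHERE id = 1;
                                        COMMIT;  -- succeeds; retrying from
                                                 -- BEGIN is the whole
                                                 -- recovery story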
On Mon, Jun 1, 2009 at 4:08 PM, Greg Stark st...@enterprisedb.com wrote:
On Mon, Jun 1, 2009 at 8:55 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Whoa! I just noticed this phrase on a re-read. I think there might
be some misunderstanding here.
You can be sure you've written your
Greg Stark st...@enterprisedb.com wrote:
On Mon, Jun 1, 2009 at 8:55 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
You can be sure you've written your transaction safely just as soon
as your COMMIT returns without error.
I think we have different definitions of "safely". You only
Josh Berkus j...@agliodbs.com wrote:
This approach allowed MSSQL to clean up on TPCE; to date their
performance on that benchmark is so much better than anyone else
nobody else wants to publish.
Since they use a compatibility level setting to control whether a
request for a serializable
On Mon, Jun 1, 2009 at 9:24 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I'm concerned with whether you can be sure that the 999th time you
run it the database won't randomly decide to declare a serialization
failure for reasons you couldn't predict were possible.
Now you're
On Mon, 2009-06-01 at 22:12 +0100, Greg Stark wrote:
No, I'm not. I'm questioning whether a serializable transaction
isolation level that makes no guarantee that it won't fire spuriously
is useful.
I am also concerned (depending on implementation, of course) that
certain situations can make it
Greg Stark st...@enterprisedb.com wrote:
On Mon, Jun 1, 2009 at 9:24 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I'm concerned with whether you can be sure that the 999th time you
run it the database won't randomly decide to declare a
serialization failure for reasons you couldn't
Jeff Davis pg...@j-davis.com wrote:
On Mon, 2009-06-01 at 22:12 +0100, Greg Stark wrote:
No, I'm not. I'm questioning whether a serializable transaction
isolation level that makes no guarantee that it won't fire
spuriously is useful.
I am also concerned (depending on implementation, of
Josh Berkus j...@agliodbs.com wrote:
So, at least theoretically, anyone who had a traffic mix similar to
TPCE would benefit. Particularly, some long-running serializable
transactions thrown into a mix of Read Committed and Repeatable
Read transactions, for a stored procedure driven
On Mon, Jun 1, 2009 at 11:07 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Greg Stark st...@enterprisedb.com wrote:
No, I'm not. I'm questioning whether a serializable transaction
isolation level that makes no guarantee that it won't fire
spuriously is useful.
Well, the technique
Greg Stark st...@enterprisedb.com wrote:
Just as carefully written SQL code can be written to avoid deadlocks,
I would expect to be able to look at SQL code and know it's safe
from serialization failures, or at least know where they might
occur.
This is the crux of our disagreement, I
Kevin Grittner wrote:
1. implementation of the paper's technique sans predicate locking,
that would avoid more serialization anomalies but not all?
I saw that as a step along the way to support for fully serializable
transactions. If covered by a migration path GUC which defaulted to
On Thursday 28 May 2009 04:49:19 Tom Lane wrote:
Yeah. The fundamental problem with all the practical approaches I've
heard of is that they only work for a subset of possible predicates
(possible WHERE clauses). The idea that you get true serializability
only if your queries are phrased just
Peter Eisentraut wrote:
On Thursday 28 May 2009 04:49:19 Tom Lane wrote:
Yeah. The fundamental problem with all the practical approaches I've
heard of is that they only work for a subset of possible predicates
(possible WHERE clauses). The idea that you get true serializability
only if your
On Thursday 28 May 2009 03:38:49 Tom Lane wrote:
* SET TRANSACTION ISOLATION LEVEL something-else should provide our
current snapshot-driven behavior. I don't have a strong feeling about
whether something-else should be spelled REPEATABLE READ or SNAPSHOT,
but lean slightly to the latter.
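(Concretely, under Tom's proposal; SNAPSHOT here is a hypothetical
spelling, not existing syntax:)

  BEGIN;
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- would become truly serializable

  BEGIN;
  SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  -- would keep today's snapshot-driven behavior; the alternative
  -- spelling under discussion (hypothetical) would be
  -- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;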
On Thursday 28 May 2009 15:24:59 Heikki Linnakangas wrote:
I don't think you need that for predicate locking. To determine if e.g
an INSERT and a SELECT conflict, you need to determine if the INSERTed
tuple matches the predicate in the SELECT. No need to deduce anything
between two predicates,
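(Heikki's point in miniature, assuming a hypothetical orders table:)

  -- Serializable reader records its predicate:
  SELECT * FROM orders WHERE amount > 100;   -- predicate: amount > 100
  -- Concurrent writer:
  INSERT INTO orders (amount) VALUES (150);
  -- Detection only has to evaluate the stored predicate against the
  -- one inserted tuple (150 > 100, so the two conflict); it never has
  -- to reason about whether two predicates overlap.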
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
1. Needs to be fully spec-compliant serializable behavior. No
anomalies.
That is what the paper describes, and where I want to end up.
2. No locking that's not absolutely necessary, regardless of the
WHERE-clause used. No
Albe Laurenz laurenz.a...@wien.gv.at wrote:
Every WHERE-clause in a SELECT will add one or more checks for each
concurrent writer.
That has not been the case in any implementation of predicate locks
I've used so far. It seems that any technique with those performance
characteristics would
Peter Eisentraut pete...@gmx.net wrote:
Could someone describe concisely what behavior snapshot isolation
provides that repeatable read does not?
Phantom reads are not possible in snapshot isolation. They are
allowed to occur (though not required to occur) in repeatable read.
Note that in
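(A minimal phantom, assuming a hypothetical orders table:)

  BEGIN ISOLATION LEVEL REPEATABLE READ;
  SELECT count(*) FROM orders WHERE amount > 100;  -- say this returns 5
  -- meanwhile another session inserts a matching row and commits
  SELECT count(*) FROM orders WHERE amount > 100;
  COMMIT;
  -- The standard allows REPEATABLE READ to return 6 on the second
  -- query (a phantom); snapshot isolation, as PostgreSQL implements
  -- it, still returns 5.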
On Thu, May 28, 2009 at 3:40 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
2. No locking that's not absolutely necessary, regardless of the
WHERE-clause used. No table locks, no page locks. Block only on
queries/updates that would truly conflict with concurrent updates
If you do a
Greg Stark st...@enterprisedb.com wrote:
Once again, the type of scan is not relevant. It's quite possible to
have a table scan and only read some of the records, or to have an
index scan and read all the records.
You need to store some representation of the qualifiers on the scan,
On Thu, May 28, 2009 at 8:43 AM, Peter Eisentraut pete...@gmx.net wrote:
On Thursday 28 May 2009 15:24:59 Heikki Linnakangas wrote:
I don't think you need that for predicate locking. To determine if e.g
an INSERT and a SELECT conflict, you need to determine if the INSERTed
tuple matches the
On Thu, May 28, 2009 at 4:33 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Can you cite anywhere that such techniques have been successfully used
in a production environment
Well there's a reason our docs say: Such a locking system is complex
to implement and extremely expensive in
Greg Stark st...@enterprisedb.com wrote:
I would want any serialization failure to be
justifiable by simple inspection of the two transactions.
BTW, there are often three (or more) transactions involved in creating
a serialization failure, where any two of them alone would not fail.
You
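(The three-transaction case can be sketched with the read-only
transaction anomaly of Fekete, O'Neil and O'Neil; hypothetical
accounts rows 'checking' and 'savings', both starting at balance 0:)

  -- T2 (withdrawal) takes its snapshot first:
  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT id, balance FROM accounts;    -- sees checking 0, savings 0
  -- T1 (deposit) now runs to completion in another session:
  --   UPDATE accounts SET balance = balance + 20 WHERE id = 'savings';
  -- T3 (read-only report) then runs to completion:
  --   SELECT id, balance FROM accounts;  -- sees savings 20, checking 0
  -- T2 withdraws 10 plus a 1-unit overdraft penalty, because on its
  -- snapshot checking + savings - 10 < 0:
  UPDATE accounts SET balance = balance - 11 WHERE id = 'checking';
  COMMIT;

  -- T3's report is consistent only with the serial order T1, T3, T2,
  -- but in that order T2 would have seen the deposit and charged no
  -- penalty. Remove any one of the three and the rest serialize fine.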
Robert Haas robertmh...@gmail.com writes:
What's hard about that? INSERTs are the hard case, because the rows
you care about don't exist yet. SELECT, UPDATE, and DELETE are easy
by comparison; you can lock the actual rows at issue. Unless I'm
confused?
UPDATE isn't really any easier than
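(One way to see Tom's point, assuming a hypothetical orders table: an
UPDATE can move a row into a predicate just as an INSERT can create a
row there.)

  -- Serializable reader:
  SELECT * FROM orders WHERE amount > 100;      -- row 7 (amount = 50)
                                                -- is not returned
  -- Concurrent writer:
  UPDATE orders SET amount = 150 WHERE id = 7;  -- row 7 now matches
  -- Locking "the actual rows at issue" misses row 7, because at read
  -- time it was not at issue; the new row version has to be tested
  -- against the predicate exactly as an inserted tuple would be.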
Greg Stark st...@enterprisedb.com wrote:
On Thu, May 28, 2009 at 4:33 PM, Kevin Grittner wrote:
Can you cite anywhere that such techniques have been successfully
used in a production environment
Well there's a reason our docs say: Such a locking system is
complex to implement and extremely
On Thu, May 28, 2009 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
What's hard about that? INSERTs are the hard case, because the rows
you care about don't exist yet. SELECT, UPDATE, and DELETE are easy
by comparison; you can lock the actual rows
Tom Lane t...@sss.pgh.pa.us wrote:
The fundamental problem with all the practical approaches I've
heard of is that they only work for a subset of possible predicates
(possible WHERE clauses). The idea that you get true
serializability only if your queries are phrased just so is ...
icky.
On Thu, May 28, 2009 at 11:49 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
The problem
is that the cost of a perfect predicate locking system is much
higher than the cost of letting some transaction block or roll back
for retry.
Surely that depends on how expensive it is to retry the
Greg Stark st...@enterprisedb.com wrote:
how much would it suck to find your big data load abort after 10
hours loading data? And how much if it wasn't even selecting
data which your data load conflicted with.
That's certainly a fair question. The prototype implementation of the
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
so long as you haven't read other
data, you would be safe in the particular case you cite.
Sorry, that's not true. If you run your bulk data load at
serializable isolation level, you could still get rolled back in this
scenario, even if you're
On Thu, May 28, 2009 at 1:33 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Now let's discuss implementation. It may well be that there is no solution
that totally satisfies all those requirements, so there's plenty of room for
various tradeoffs to discuss. I think fully
I want to try to get agreement that it would be a good idea to
implement serializable transactions, and what that would look like
from the user side. At this point, we should avoid discussions of
whether it's possible or how it would be implemented, but focus on
what that would look like and
On Wed, 2009-05-27 at 15:34 -0500, Kevin Grittner wrote:
(2) The standard requires this because it is the only cost-effective
way to ensure data integrity in some environments, particularly those
with a large number of programmers, tables, and queries; and which
have complex data integrity
Jeff Davis pg...@j-davis.com wrote:
On Wed, 2009-05-27 at 15:34 -0500, Kevin Grittner wrote:
(C) One or more GUCs will be added to control whether the new
behavior is used when serializable transaction isolation is
requested or whether, for compatibility with older PostgreSQL
releases,
On Wed, 2009-05-27 at 18:54 -0500, Kevin Grittner wrote:
I've gotten the distinct impression that some would prefer to continue
to use their existing techniques under snapshot isolation. I was sort
of assuming that they would want a GUC to default to legacy behavior
with a new setting for
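(A sketch of what such a GUC might look like; the name and values are
purely illustrative, nothing of the sort has been agreed:)

  -- Hypothetical setting, not an existing GUC:
  SET serializable_isolation_mode = 'snapshot';  -- legacy behavior
  SET serializable_isolation_mode = 'strict';    -- new truly
                                                 -- serializable behavior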
Jeff Davis pg...@j-davis.com writes:
On Wed, 2009-05-27 at 18:54 -0500, Kevin Grittner wrote:
I've gotten the distinct impression that some would prefer to continue
to use their existing techniques under snapshot isolation. I was sort
of assuming that they would want a GUC to default to
On Wed, May 27, 2009 at 7:54 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Jeff Davis pg...@j-davis.com wrote:
On Wed, 2009-05-27 at 15:34 -0500, Kevin Grittner wrote:
(C) One or more GUCs will be added to control whether the new
behavior is used when serializable transaction
Jeff Davis pg...@j-davis.com wrote:
1. implementation of the paper's technique sans predicate locking,
that would avoid more serialization anomalies but not all?
I saw that as a step along the way to support for fully serializable
transactions. If covered by a migration path GUC which
Jeff Davis pg...@j-davis.com writes:
On Wed, 2009-05-27 at 20:38 -0400, Tom Lane wrote:
* Anything else you want to control should be a GUC, as long as it
doesn't affect any correctness properties.
But that still leaves out another behavior which avoids some of the
serialization anomalies
On Wed, 2009-05-27 at 19:51 -0500, Kevin Grittner wrote:
Jeff Davis pg...@j-davis.com wrote:
1. implementation of the paper's technique sans predicate locking,
that would avoid more serialization anomalies but not all?
I saw that as a step along the way to support for fully
Robert Haas robertmh...@gmail.com wrote:
I think we should introduce a new value for SET TRANSACTION ISOLATION
LEVEL, maybe SNAPSHOT, intermediate between READ COMMITTED and
SERIALIZABLE.
The standard defines such a level, and calls it REPEATABLE READ.
Snapshot semantics are more strict
Tom Lane t...@sss.pgh.pa.us wrote:
Hmm, what I gathered was that that's not changing any basic semantic
guarantees (and therefore is okay to control as a GUC). But I
haven't read the paper so maybe I'm missing something.
The paper never suggests attempting these techniques without a
On Wed, 2009-05-27 at 20:55 -0400, Tom Lane wrote:
Hmm, what I gathered was that that's not changing any basic semantic
guarantees (and therefore is okay to control as a GUC). But I haven't
read the paper so maybe I'm missing something.
On second read of this comment:
On Wed, 2009-05-27 at 20:38 -0400, Tom Lane wrote:
A lesson that I think we've learned the hard way over the past few years
is that GUCs are fine for controlling performance issues, but you expose
yourself to all sorts of risks if you make fundamental semantics vary
depending on a GUC.
I
On Wed, May 27, 2009 at 9:00 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
I think we should introduce a new value for SET TRANSACTION ISOLATION
LEVEL, maybe SNAPSHOT, intermediate between READ COMMITTED and
SERIALIZABLE.
The standard
On 28 May 2009, at 01:51, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
At the point where we added an escalation
to table locking for the limit, started with the table lock when we
knew it was a table scan, and locked the index range for an index
scan,
I still think you're stuck in
Greg Stark greg.st...@enterprisedb.com writes:
Without any real way to represent predicates this is all pie in the
sky. The reason we don't have predicate locking is because of this
problem which it sounds like we're no closer to solving.
Yeah. The fundamental problem with all the
On 28 May 2009, at 02:49, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark greg.st...@enterprisedb.com writes:
Without any real way to represent predicates this is all pie in the
sky. The reason we don't have predicate locking is because of this
problem which it sounds like we're
Greg Stark greg.st...@enterprisedb.com wrote:
Postgres supports a whole lot more scan types than just these two
and many of them use multiple indexes or indexes that don't
correspond to ranges of key values at all.
Well, certainly all of the plans I've looked at which use btree
indexes
On Wed, May 27, 2009 at 9:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark greg.st...@enterprisedb.com writes:
Without any real way to represent predicates this is all pie in the
sky. The reason we don't have predicate locking is because of this
problem which it sounds like we're no closer
Kevin Grittner wrote:
Greg Stark greg.st...@enterprisedb.com wrote:
Without any real way to represent predicates this is all pie in the
sky
And this is 180 degrees opposite from what I just heard at PGCon should be
the focus of discussion at this point. Let's get agreement on what
would be nice