Lewis Chan via discuss <[email protected]> writes:

> more importance to flexibility and performance, aren't there any
> customers(especially financial fields) greatly complaining the subtle
> incorrectness of mysql, leading to money loss and business failure ?
> If there are, is it enough to push mysql team to correct some
> incorrectnesses that Kyle found ?

I think there are two different meanings of "incorrectness" being mixed
here.

One is deviations from what the SQL standard says. In general, I don't think
deviating from the SQL standard's definitions of isolation levels is
directly correlated with "money loss and business failures".

The other is "returning different results than intended/documented". While
"intended/documented" is often somewhat vaguely defined, this kind of
incorrectness _is_ complained about, and the developers _do_ take it
seriously and fix it.

As long as the queries are returning the results expected by the application
developers, the applications will perform correctly. The SQL standard in
practice plays a much weaker role in the SQL space compared to something
like the C/C++ standard.

As Kyle notes:

  "ANSI SQL’s isolation levels are bad, but some levels have caused more
  problems than others ... there is less agreement on the semantics of
  Repeatable Read."

Snapshot isolation provides a nice solution for, informally, making SELECTs
repeatable. Each SELECT query in a transaction reads from the same snapshot
of the database, so it always sees the same data, without requiring any row
or table locks.

But that does not carry over to writes, since we want our transactions to
change the current state of the database, not some old snapshot. Consider a
simple example like this, where two transactions T2 and T3 update the same
row in parallel:

  T1: BEGIN
  T1: INSERT INTO t1 VALUES (1,1), (2,2), (3,3)
  T1: COMMIT

  T2:   BEGIN
  T2:   SELECT * FROM t1                 # S1: (2,2) ?

  T3:     BEGIN
  T3:     UPDATE t1 SET b=12 WHERE a=2
  T3:     COMMIT

  T2:   SELECT * FROM t1 WHERE a=2       # S2: (2,2) ? (2,12) ?
  T2:   UPDATE t1 SET b=-1 WHERE b>=10   # U1: update the row?
  T2:   SELECT * FROM t1 WHERE a=2       # S3: (2,2) ? (2,-1) ?
  T2:   COMMIT

From a usability point of view, what should the result of transaction T2 be,
assuming "REPEATABLE READ" mode?

The first select (S1) should obviously return the row (2,2).

But what about (S2)? If SELECTs are supposed to be repeatable, it is
reasonable to expect it to return (2,2) also.

But the update (U1) needs to see the current row (2,12) to change it, as
the user would reasonably expect (and if it did not, statement-based
replication would break).

And the last select (S3) should reasonably be expected to return the row
(2,-1), the result of the update (U1).

Basically, if reads and writes are mixed, something has to give.

With InnoDB, if the user wants the selects (S1) and (S2) to be consistent
with the update (U1), the SELECT ... FOR UPDATE form can be used. Under the
hood, this takes locks on the rows read, blocking T3 from running in
parallel with T2 (and possibly raising a deadlock error).
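
For concreteness, here is a sketch of what T2 could look like with locking
reads. This is my own illustration, not taken from Kyle's tests, and the
exact set of rows locked during the UPDATE's scan depends on which indexes
exist on t1:

  T2:   BEGIN
  T2:   SELECT * FROM t1 WHERE a=2 FOR UPDATE  # locks the row, returns (2,2)

  T3:     BEGIN
  T3:     UPDATE t1 SET b=12 WHERE a=2         # blocks on T2's row lock

  T2:   UPDATE t1 SET b=-1 WHERE b>=10         # sees (2,2), so no row matches
  T2:   SELECT * FROM t1 WHERE a=2             # still (2,2), consistent
  T2:   COMMIT                                 # releases the lock

  T3:     # the UPDATE now proceeds; final row is (2,12)
  T3:     COMMIT

Now all of T2's statements see the same data, at the cost of T3 waiting for
T2 (or one of the two transactions getting a lock wait timeout or deadlock
error) instead of running in parallel.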

From my (limited) reading of Kyle's analysis, the discrepancies he finds
all seem to involve this basic kind of read/write mix. It would be
interesting to know whether using FOR UPDATE on the SELECTs would remove
these discrepancies, or if more subtle issues (/bugs) remain around this.

Should REPEATABLE READ in MariaDB/InnoDB strictly obey the SQL standard's
definition of isolation levels? Ideally, yes. But is it useful in practice
to have the choice between higher consistency (FOR UPDATE) and more
concurrency (plain snapshot reads)? Probably also yes.

 - Kristian.