Steve,

----- Original Message -----
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 1:23 AM
Subject: RE: Performance Problems with InnoDB Row Level Locking...


> Heikki,
>
> > Next-key locking in InnoDB allows you to lock the non-existence of rows
> > and thus prevents phantom rows from appearing.
> OK, now I understand what you're getting at with phantom rows.
>
> But given the tradeoff between the inadvertent next-key deadlocking
> challenge and the phantom rows challenge, I think I'd rather have the
> phantom rows challenge because: 1) it's not as common a problem; and 2)
> on the few occasions when I am confronted with it I can easily prevent it
> with a table lock. The need to work around next-key deadlocking issues is
> constant (EVERY database insert, update or delete) and it seems the
> solution

normally you have to be prepared to reissue your transactions. Deadlocks
generated by UNIQUE KEY checks and FOREIGN KEY checks are not the result of
next-key locking, but a general issue in row level locking: these checks
involve implicit reading of tables in connection with an insert, delete, or
update, so you can get surprising deadlocks. Proving a system deadlock-free
is difficult.
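As a rough sketch of what "reissue your transactions" means in client code
(Python here for brevity; the exception class and the toy transaction are
stand-ins, not the DBI or DBD::mysql API):

```python
class DeadlockError(Exception):
    """Stands in for the server error 'Deadlock found when trying to get lock'."""

def run_with_retry(txn, max_retries=5):
    """Run a transaction callable; if it deadlocks, reissue it."""
    for attempt in range(1, max_retries + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_retries:
                raise
            # A real client would ROLLBACK here and often sleep a short
            # randomized backoff before retrying.

# Toy transaction that deadlocks twice, then commits.
attempts = {"n": 0}
def txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("Deadlock found when trying to get lock")
    return "committed"

print(run_with_retry(txn))  # committed
```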

SHOW INNODB STATUS is the tool to analyze deadlock problems in 3.23.52 or
4.0.3. With it we found the spurious insert deadlocks which were removed in
4.0.3. There may still be spurious deadlocks which appear in real-world
applications and which are easy to remove by fine-tuning the next-key
locking algorithm.

> is more difficult to achieve and may eventually come back to serialization
> or table level locks which is what I'm trying to avoid to begin with. I've
> already addressed some deadlock issues with frequent commits, smaller
> transactions, and better indexes and I only want to lock tables when I
> absolutely have to.
>
> I may be wrong but it just seems to me that the next-key locking approach
> merely creates another concurrency issue and the subsequent next-key
> deadlock problem is just too significant to ignore.

We need serializability to make MySQL replication and recovery from the
binlog work. That is why InnoDB cannot allow phantom rows. It would be a
relatively easy change in InnoDB itself to lower the isolation level so that
phantom rows would be allowed.
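The danger is easy to demonstrate with a toy model (a hypothetical Python
sketch, not real server code): a row-at-a-time copy that does not block
phantoms can pick up only half of a concurrently committed balanced pair, so
replaying the same INSERT ... SELECT from the binlog could give a different
result than the original execution:

```python
# Source table: the sum of balances is 1,000,000 at all times.
account = [("A", 600000), ("B", 400000)]

def copy_allowing_phantoms(src):
    """Simulates INSERT INTO account2 SELECT * FROM account without
    phantom protection: rows are read one at a time while another
    transaction commits a balanced pair ('Jones' +1000, 'Smith' -1000)."""
    dst = []
    i = 0
    while i < len(src):
        dst.append(src[i])
        if i == 0:
            # The concurrent transaction commits both rows. 'Smith' lands
            # ahead of the scan position and is seen as a phantom;
            # 'Jones' sorts behind the scan and is missed entirely.
            src.append(("Smith", -1000))
        i += 1
    return dst

account2 = copy_allowing_phantoms(account)
print(sum(b for _, b in account2))  # 999000: the invariant is broken
```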

> Humbly,
> Steve

Regards,

Heikki

> -----Original Message-----
> From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 05, 2002 2:54 PM
> To: Orr, Steve
> Cc: [EMAIL PROTECTED]
> Subject: Re: Performance Problems with InnoDB Row Level Locking...
>
>
> Steve,
>
> ----- Original Message -----
> From: "Orr, Steve" <[EMAIL PROTECTED]>
> To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
> Sent: Thursday, September 05, 2002 11:04 PM
> Subject: RE: Performance Problems with InnoDB Row Level Locking...
>
>
> > Heikki,
> >
> > You wrote...
> > > You are getting so many deadlocks that some transactions do not pass
> > > at all?
> > No, the transactions eventually succeed but performance suffers. Like
> > I said, this is a stress test to identify bottlenecks in database
> > performance.
>
> <cut>
>
> some optimization of next-key locking is possible. I recently removed
> some spurious deadlocks in 4.0 because a big customer complained of the
> problem.
>
> > ------------------------------------------------------------
> > > Consider a consistency rule: "sum of all balances in table ACCOUNT
> > > must be 1,000,000". How do you keep that true if your database
> > > cannot block phantom rows?
> > Can't that be accomplished by the "SELECT ... FOR UPDATE" syntax?
>
> That does not block new inserts to the table in Oracle. If you do:
>
> INSERT INTO account2 SELECT * FROM account;
>
> and someone else at the same time inserts within a single transaction 2
> rows ('Jones', 1000), ('Smith', -1000) to table account, you may end up
> with table account2 where the sum of balances is not 1,000,000, though
> table account always had 1,000,000 as the sum.
>
> The way to work around these serializability issues in Oracle is to use
> table level locks. In the 1980's, when Oracle did not yet have foreign
> key constraints, people used table level locks to implement referential
> integrity. You have to lock the NON-existence of child rows when you
> delete a parent row.
>
> Next-key locking in InnoDB allows you to lock the non-existence of rows
> and thus prevents phantom rows from appearing.
>
> > Thanks again,
> > Steve
>
> Best regards,
>
> Heikki
>
> > -----Original Message-----
> > From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
> > Sent: Thursday, September 05, 2002 1:30 PM
> > To: Orr, Steve; [EMAIL PROTECTED]
> > Subject: Re: Performance Problems with InnoDB Row Level Locking...
> >
> >
> > Steve,
> >
> > ----- Original Message -----
> > From: "Orr, Steve" <[EMAIL PROTECTED]>
> > To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > Sent: Thursday, September 05, 2002 9:49 PM
> > Subject: RE: Performance Problems with InnoDB Row Level Locking...
> >
> >
> > > Hello again Heikki and thanks for your informative reply.
> > >
> > > Regarding...
> > > > innodb_flush_log_at_trx_commit=2
> > > This is not an option as we must guarantee no lost transactions.
> > > But I will test it out of curiosity just to see what the performance
> > > difference is.
> >
> > if you want to get high performance for disk flushes, you should buy a
> > disk with a battery-backed cache, and check that the disk driver is
> > aware of it.
> >
> > Otherwise we are constrained by the disk rotation speed, some 150
> > rotations per second.
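That figure translates directly into a commit-rate ceiling (a
back-of-the-envelope sketch; 150 rotations/second corresponds to roughly a
9,000 RPM drive):

```python
rotations_per_second = 150                        # figure quoted above
worst_case_wait_ms = 1000 / rotations_per_second  # one full rotation per flush

# With innodb_flush_log_at_trx_commit=1 and no battery-backed write
# cache, a single connection cannot commit faster than the platter spins.
max_serial_commits_per_second = rotations_per_second

print(f"{worst_case_wait_ms:.1f} ms per flush, "
      f"{max_serial_commits_per_second} commits/s ceiling")
```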
> >
> > > Regarding...
> > > > InnoDB uses next-key locking to ensure serializability and that
> > > > 'phantom rows' do not appear. You can get lock conflicts even if
> > > > the queries seemingly would not overlap.
> > > Does this mean that InnoDB is locking the next leaf in the B-Tree?
> > > That would explain the problem as those rows could be updated by
> > > other sessions.
> >
> > No, only the next index record in alphabetical order.
> >
> > > If that's the case then I think the next-key locking architecture is
> > > the problem because it introduces "artificial" deadlocks on heavily
> > > used tables and indexes that would otherwise not occur.
> >
> > Yes, that is true.
> >
> > > (See http://portal.acm.org/citation.cfm?id=335461&dl=ACM&coll=portal#)
> > >
> > > Using the same code on the same machine I'm getting dramatically
> > > better performance with PostgreSQL and Oracle, both of which
> > > implement multi-version concurrency control with an ANSI isolation
> > > level of "read committed." I understand that this isolation level
> > > allows for unrepeatable reads but this is easily overcome
> > > programmatically (if needed). It seems like the repeatable read
> > > isolation level isn't as practical and isn't really needed that
> > > often. Based on the workarounds you listed in the coping with
> > > deadlocks link, I don't see any way around my performance problem.
> > > (I had already tried transaction resubmission but it just perpetuates
> > > the problem.)
> >
> > You are getting so many deadlocks that some transactions do not pass
> > at all? Then best to serialize them with LOCK TABLES or the 'semaphore
> > table' method explained in the manual. Too much concurrency sometimes
> > degrades performance, and it is better to fall back to coarser
> > granularity locking.
> >
> > > If the repeatable read isolation level presents a performance
> > > problem that I can't work around then I'd rather have the read
> > > committed isolation level. Are there any plans to enable the read
> > > committed isolation level in InnoDB? It seems like read committed is
> > > the most commonly implemented isolation level amongst the other
> > > database vendors, so what was behind the decision to implement the
> > > repeatable read isolation level in InnoDB? Just curious. :-)
> >
> > No, unfortunately MySQL replication and recovery from the binlog
> > require that updating transactions have serializable execution. We log
> > complete SQL statements. They have to be performed in the exact same
> > way in the slave or in recovery. Consider
> >
> > INSERT INTO ... SELECT ...
> >
> > If phantom rows could appear in the result set of the SELECT, the
> > number of rows inserted in the slave could differ from what happened
> > in the master.
> >
> > > In the link you gave you state:
> > > "You can get deadlocks even in the case of transactions which just
> > > insert or delete a single row. That is because these operations are
> > > not really 'atomic': they automatically set locks on the (possibly
> > > several) index records of the row inserted/deleted."
> > >
> > > So... if these operations are not atomic then does that mean that
> > > MySQL still does not pass the ACID test even with InnoDB?
> >
> > InnoDB of course complies with the 'ACID' rules. What I meant by the
> > above is that even a single insert into a table is a complex sequence
> > of operations. People often forget that several indexes may be
> > updated, and foreign key checks are made in other tables. Deadlocks
> > can arise in unexpected ways.
> >
> > Next-key locking actually makes InnoDB more 'ACID' than Oracle or
> > PostgreSQL. Phantom rows compromise the 'C' and 'I' in Oracle and
> > PostgreSQL. Consider a consistency rule: "sum of all balances in table
> > ACCOUNT must be 1,000,000". How do you keep that true if your database
> > cannot block phantom rows?
> >
> > "
> > Consistency
> > The database is transformed from one valid state to another valid state.
A
> > transaction is legal only if it obeys user-defined integrity
constraints.
> > Illegal transactions aren't allowed and, if an integrity constraint
can't
> be
> > satisfied the transaction is rolled back. For example, suppose that you
> > define a rule that postings in a discussion forum table must be tied to
a
> > valid user ID. Then you hire Joe Novice to write some admin pages. Joe
> > writes a delete-user page that doesn't bother to check whether or not
the
> > deletion will result in an orphaned discussion forum posting. Oracle
will
> > check, though, and abort any transaction that would result in you having
a
> > discussion forum posting by a deleted user.
> >
> > Isolation
> > The results of a transaction are invisible to other transactions until
the
> > transaction is complete. For example, suppose you have a page to show
new
> > users and their photographs. This page is coded in reliance on the
> > publisher's directive that there will be a mugshot for every user and
will
> > present a broken image if there is not. Jane Newuser is registering at
> your
> > site at the same time that Bill Olduser is viewing the new user page.
The
> > script processing Jane's registration does inserts into several tables:
> > users, mugshots, users_demographics. This may take some time if Jane's
> > mugshot is large. If Bill's query starts before Jane's transaction
> commits,
> > Bill won't see Jane at all on his new-users page, even if Jane's
insertion
> > into some of the tables is complete.
> > "
> >
> > > Thanks again and I'm eagerly awaiting your reply.
> > >
> > >
> > > Respectfully,
> > > Steve Orr
> >
> > Best regards,
> >
> > Heikki
> > Innobase Oy
> >
> >
> > > -----Original Message-----
> > > From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
> > > Sent: Thursday, September 05, 2002 9:05 AM
> > > To: [EMAIL PROTECTED]
> > > Subject: Re: Performance Problems with InnoDB Row Level Locking...
> > >
> > >
> > > Steve,
> > >
> > > ----- Original Message -----
> > > From: ""Orr, Steve"" <[EMAIL PROTECTED]>
> > > Newsgroups: mailing.database.mysql
> > > Sent: Thursday, September 05, 2002 5:52 PM
> > > Subject: Performance Problems with InnoDB Row Level Locking...
> > >
> > >
> > > > Background:
> > > > I've developed a simplistic Perl program to test database
> > > > performance with concurrent session queries. The queries involve
> > > > inserts, updates, and deletes in order to test database performance
> > > > in an OLTP multi-user ACID-compliant scenario. Obviously this is
> > > > not a "real world test" but it does stress the database engine's
> > > > ability to manage transactions so it is somewhat valid for
> > > > comparison purposes.
> > > >
> > > > Problem:
> > > > When I do an insert/update/delete I sporadically get the following:
> > > > ":mysql::st execute failed: Deadlock found when trying to get
> > > > lock; Try restarting transaction at dafunc.pm line 340..." The
> > > > word "deadlock" is misleading because all the database changes are
> > > > based on a session number ensuring that no session is trying to
> > > > change data that is also being changed
> > >
> > > InnoDB uses next-key locking to ensure serializability and that
> > > 'phantom rows' do not appear. You can get lock conflicts even if the
> > > queries seemingly would not overlap.
> > >
> > > For example,
> > >
> > > CREATE TABLE emptytable(a INT NOT NULL, PRIMARY KEY(a)) TYPE = InnoDB;
> > >
> > > user 1: SELECT * FROM emptytable WHERE a = 100 LOCK IN SHARE MODE;
> > >
> > > will make
> > >
> > > user 2: INSERT INTO emptytable VALUES (150);
> > >
> > > wait for a next-key lock on the 'supremum' of the primary index.
> > >
> > > These operations would not overlap when using so-called predicate
> > > locking, but that is too expensive to implement.
> > >
> > > In transactional databases deadlocks are a classic problem. Please
> > > refer to http://www.innodb.com/ibman.html#Cope_with_deadlocks.
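The supremum example above can be modeled with a toy lock table (a
hypothetical sketch only, not InnoDB's actual lock structures): a locking
read takes a next-key lock on the first index record at or above the
searched key, falling back to the 'supremum' pseudo-record when no such
record exists, and an insert must wait if the record bounding its gap is
locked:

```python
import bisect

SUPREMUM = float("inf")  # pseudo-record that sorts above every real key

class NextKeyLocks:
    def __init__(self, keys=()):
        self.keys = sorted(keys)        # committed index records
        self.locked_next_keys = set()   # next-key locks held by readers

    def locking_read(self, key):
        """SELECT ... WHERE a = key LOCK IN SHARE MODE: take a next-key
        lock on the first record >= key, or on the supremum."""
        i = bisect.bisect_left(self.keys, key)
        nxt = self.keys[i] if i < len(self.keys) else SUPREMUM
        self.locked_next_keys.add(nxt)

    def insert_would_wait(self, key):
        """An insert waits if the record closing its gap carries a
        next-key lock."""
        i = bisect.bisect_right(self.keys, key)
        nxt = self.keys[i] if i < len(self.keys) else SUPREMUM
        return nxt in self.locked_next_keys

# user 1: SELECT * FROM emptytable WHERE a = 100 LOCK IN SHARE MODE;
t = NextKeyLocks(keys=[])
t.locking_read(100)           # empty table -> next-key lock on supremum

# user 2: INSERT INTO emptytable VALUES (150);
print(t.insert_would_wait(150))  # True: 150's gap ends at the locked supremum
```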
> > >
> > > > by another session. It appears a time out is occurring before the
> > > > shared row level locks are acquired. Changing
> > > > innodb_lock_wait_timeout doesn't seem to do much. How do I tune
> > > > this?
> > > >
> > > > Given table locking problems associated with MyISAM, it was thought
> > > > that InnoDB would perform better, but in my tests performance is
> > > > worse. It appears InnoDB is taking a long time to acquire shared
> > > > row level locks and is timing out. If that's the case then the
> > > > benefit of row level locks over table level locks is more than
> > > > offset by the internal InnoDB overhead to manage the locks. Any
> > > > other explanations? Any known performance issues with InnoDB? Any
> > > > Perl DBI driver performance issues?
> > > >
> > > > Here's a snippet of the Perl code for the curious:
> > > > -------------------------------------------------------------------------
> > > > ## Update by session, rand_val...
> > > > sub updSessionRand {
> > > >  my $rtnval = 0     ;
> > > >  eval {
> > > >   $estart   = time() ;
> > > >   my $dbh   = $_[1]  ;
> > > >   $sess_val = $_[2]  ;
> > > >   $sqlStmt  = "UPDATE bench_data SET text_val='updated text by rand_val',
> > > >               timestamp_val=$timestamp
> > > >               WHERE sess_val = ? AND rand_val BETWEEN ? AND ? ";
> > > >
> > > >   my $stmtHdl=$dbh->prepare($sqlStmt);
> > > >   $stmtHdl->execute($sess_val,$sess_val+900,$sess_val+1500) ;
> > > >   $dbh->commit();
> > > >   $edone = time();
> > > >   $totsec = $edone-$estart;
> > > >   print "Session:$sess_val, upd02, seconds:$totsec\n";
> > > >  };
> > > >  if ($@) {
> > > >     warn "Session $sess_val upd02 failed.\n $@";
> > > >     $rtnval = 1 ;
> > > >     }
> > > >  return $rtnval ;
> > > > }
> > > > -------------------------------------------------------------------------
> > > >
> > > > (Side Note: In a separate process I found out that the syntax
> > > > "SELECT ... FOR UPDATE" produces exclusive locks so I changed it
> > > > to "SELECT ... LOCK IN SHARE MODE" and that helped matters. I also
> > > > tried setting the transaction isolation level to serializable but
> > > > that was worse.)
> > > >
> > > > I am running MySQL version 3.23.52-max-log with Linux 2.4.18-3
> > > > (RedHat 7.3.2.96-110) on 2 PentiumIII processors with 2GB RAM.
> > > >
> > > > Here are some current innodb related my.cnf settings:
> > > > set-variable = innodb_lock_wait_timeout=300
> > > > innodb_flush_log_at_trx_commit=1
> > >
> > >
> > > You can try setting
> > >
> > > innodb_flush_log_at_trx_commit=2
> > >
> > > if you can afford losing some last transactions in a power outage or
> > > an operating system crash.
> > >
> > >
> > > > set-variable = innodb_buffer_pool_size=384M
> > > > set-variable = innodb_additional_mem_pool_size=20M
> > > > set-variable = innodb_thread_concurrency=4
> > > >
> > > >
> > > > TIA!
> > > > Steve Orr
> > >
> > > Best regards,
> > >
> > > Heikki
> > > Innobase Oy



---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <mysql-unsubscribe-##L=##[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
