Hello again Heikki and thanks for your informative reply.

Regarding...
> innodb_flush_log_at_trx_commit=2
This is not an option as we must guarantee no lost transactions. But I will
test it out of curiosity just to see what the performance difference is. 

Regarding...
> InnoDB uses next-key locking to ensure serializability and that 'phantom
> rows' do not appear. You can get lock conflicts even if the queries
> seemingly would not overlap.
Does this mean that InnoDB is locking the next key in the B-tree index? That
would explain the problem, since the rows covered by those locks could be
updated by other sessions. If that's the case, then I think the next-key
locking architecture is the problem, because it introduces "artificial"
deadlocks on heavily used tables and indexes that would not otherwise occur.
(See http://portal.acm.org/citation.cfm?id=335461&dl=ACM&coll=portal#)
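
To make the "artificial" conflict concrete, here is a hypothetical
two-session sketch (the table name and values are invented for illustration
and only mirror the shape of my benchmark queries). Neither session selects
or inserts a row the other one touches, yet the second session still waits:

```sql
-- Hypothetical demo table (not my actual benchmark schema).
CREATE TABLE gap_demo (sess_val INT NOT NULL, rand_val INT NOT NULL,
                       INDEX sr (sess_val, rand_val)) TYPE = InnoDB;
INSERT INTO gap_demo VALUES (1, 1000), (3, 1000);

-- Session A: no rows match sess_val = 2, but the next-key lock is
-- placed on the following index record, covering the gap:
SELECT * FROM gap_demo WHERE sess_val = 2 LOCK IN SHARE MODE;

-- Session B: blocks on the gap lock, even though it inserts a row
-- that session A's query would never return:
INSERT INTO gap_demo VALUES (2, 500);
```

With several sessions holding and requesting such gap locks in opposite
orders, this kind of waiting escalates into the deadlocks I'm seeing.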

Using the same code on the same machine, I'm getting dramatically better
performance with PostgreSQL and Oracle, both of which implement multi-version
concurrency control with an ANSI isolation level of "read committed." I
understand that this isolation level allows unrepeatable reads, but that is
easily overcome programmatically (if needed). It seems the repeatable read
isolation level is less practical and isn't really needed that often.
Based on the workarounds you listed in the "coping with deadlocks" link, I
don't see any way around my performance problem. (I had already tried
transaction resubmission, but it just perpetuates the problem.)

If the repeatable read isolation level presents a performance problem that I
can't work around, then I'd rather have the read committed isolation level.
Are there any plans to enable read committed in InnoDB? It seems that read
committed is the most commonly implemented isolation level among the other
database vendors, so what was behind the decision to implement repeatable
read in InnoDB? Just curious. :-)
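
If read committed ever becomes available, I would expect to select it per
connection with something like the standard syntax below. (This is
hypothetical on my part; the only form I have actually verified against
InnoDB is the serializable one mentioned in my original post.)

```sql
-- Hypothetical: request read committed before starting a transaction.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Under read committed, each statement would see the latest committed
-- data, and range scans would not need next-key locks to block phantoms.
```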

In the link you gave you state:
"You can get deadlocks even in the case of transactions which just insert or
delete a single row. That is because these operations are not really
'atomic': they automatically set locks on the (possibly several) index
records of the row inserted/deleted."

So... if these operations are not atomic, does that mean that MySQL still
does not pass the ACID test, even with InnoDB?

Thanks again and I'm eagerly awaiting your reply.


Respectfully,
Steve Orr





-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 9:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...


Steve,

----- Original Message -----
From: "Orr, Steve" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.mysql
Sent: Thursday, September 05, 2002 5:52 PM
Subject: Performance Problems with InnoDB Row Level Locking...


> Background:
> I've developed a simplistic Perl program to test database performance with
> concurrent session queries. The queries involve inserts, updates, and
> deletes in order to test database performance in an OLTP multi-user ACID
> compliant scenario. Obviously this is not a "real world test" but it does
> stress the database engine's ability to manage transactions so it is
> somewhat valid for comparison purposes.
>
> Problem:
> When I do an insert/update/delete I sporadically get the following:
> ":mysql::st execute failed: Deadlock found when trying to get lock; Try
> restarting transaction at dafunc.pm line 340..." The word "deadlock" is
> misleading because all the database changes are based on a session number
> ensuring that no session is trying to change data that is also being
> changed

InnoDB uses next-key locking to ensure serializability and that 'phantom
rows' do not appear. You can get lock conflicts even if the queries
seemingly would not overlap.

For example,

CREATE TABLE emptytable(a INT NOT NULL, PRIMARY KEY(a)) TYPE = InnoDB;

user 1: SELECT * FROM emptytable WHERE a = 100 LOCK IN SHARE MODE;

will make

user 2: INSERT INTO emptytable VALUES (150);

wait for a next-key lock on the 'supremum' of the primary index.

These operations would not overlap when using so-called predicate locking,
but that is too expensive to implement.

In transactional databases deadlocks are a classic problem. Please refer to
http://www.innodb.com/ibman.html#Cope_with_deadlocks.

> by another session. It appears a timeout is occurring before the shared
> row level locks are acquired. Changing innodb_lock_wait_timeout doesn't
> seem to do much. How do I tune this?
>
> Given the table locking problems associated with MyISAM, it was thought
> that InnoDB would perform better, but in my tests performance is worse.
> It appears InnoDB is taking a long time to acquire shared row level locks
> and is timing out. If that's the case, then the benefit of row level
> locks over table level locks is more than offset by the internal InnoDB
> overhead to manage the locks. Any other explanations? Any known
> performance issues with InnoDB? Any Perl DBI driver performance issues?
>
> Here's a snippet of the Perl code for the curious:
> -------------------------------------------------------------------------
> ## Update by session, rand_val...
> sub updSessionRand {
>  my $rtnval = 0 ;        # declared outside eval so the return value
>  eval {                  # survives the block
>   $estart   = time() ;
>   my $dbh   = $_[1]  ;   # scalar element, not a one-element slice
>   $sess_val = $_[2]  ;
>   $sqlStmt  = "UPDATE bench_data SET text_val='updated text by rand_val',
>               timestamp_val=$timestamp
>               WHERE sess_val = ? AND rand_val BETWEEN ? AND ? ";
>
>   my $stmtHdl = $dbh->prepare($sqlStmt);
>   $stmtHdl->execute($sess_val, $sess_val+900, $sess_val+1500) ;
>   $dbh->commit();
>   $edone  = time();
>   $totsec = $edone - $estart;
>   print "Session:$sess_val, upd02, seconds:$totsec\n";
>  };
>  if ($@) {
>     warn "Session $sess_val upd02 failed.\n $@";
>     $rtnval = 1 ;
>  }
>  return $rtnval ;
> }
> -------------------------------------------------------------------------
>
> (Side Note: In a separate process I found out that the syntax "SELECT ...
> FOR UPDATE" produces exclusive locks, so I changed it to "SELECT ... LOCK
> IN SHARE MODE" and that helped matters. I also tried setting the
> transaction isolation level to serializable, but that was worse.)
>
> I am running MySQL version 3.23.52-max-log with Linux 2.4.18-3
> (RedHat 7.3.2.96-110) on 2 PentiumIII processors with 2GB RAM.
>
> Here are some current innodb related my.cnf settings:
> set-variable = innodb_lock_wait_timeout=300
> innodb_flush_log_at_trx_commit=1


You can try setting

innodb_flush_log_at_trx_commit=2

if you can afford to lose the last few committed transactions in a power
outage or an operating system crash.


> set-variable = innodb_buffer_pool_size=384M
> set-variable = innodb_additional_mem_pool_size=20M
> set-variable = innodb_thread_concurrency=4
>
>
> TIA!
> Steve Orr
> sql,query

Best regards,

Heikki
Innobase Oy

---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <mysql-unsubscribe-##L=##[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
