Re: mysqldump with single-transaction option.

2014-10-08 Thread Andrew Moore
We tend to use binary backups (Xtrabackup) for full, consistent dataset
restores (think slave provisioning and disaster recovery) and logical
backups to perform single-table restores in the event that a rollback is
needed because someone drops a table or carries out an insane update. We
also use mydumper instead of mysqldump for its compression and encryption
features. mysqldump stops being useful on large datasets due to its
single-threadedness.
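A hedged sketch of the sort of mydumper invocation meant here (the thread
count and output directory are made-up placeholders; --threads, --compress,
and --outputdir are standard mydumper options):

```shell
# Parallel, compressed logical backup; one file per table makes
# single-table restores straightforward.
mydumper --threads 4 --compress \
         --outputdir /backups/logical/$(date +%F)
```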



On Tue, Oct 7, 2014 at 8:35 AM, yoku ts. yoku0...@gmail.com wrote:

 Maybe not, as you guessed.

  It means that after lock is released, dump is made while the read and
 write
  activity is going on.   This dump then, would be inconsistent.

 It's not only the binary logs: each table in your dump is based on the
 time when mysqldump began to dump *that* table. For example, table1 in
 your dump is as of 2014-10-07 00:00:00, the next table2 as of
 2014-10-07 00:00:01, the next table3 later still, and so on.

 I don't see a practical way to restore its consistency..


 Regards,


 2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

  So, in the case of a *NOT InnoDB* storage engine, say MyISAM, will this
  dump be of any use?
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:
 
   Hello,
  
    If you use any *NOT InnoDB* storage engine, you're right:
    mysqldump with --single-transaction doesn't give you any consistency,
   as you say.
   
    If all your databases and tables use InnoDB, your dumping process is
    protected by the REPEATABLE-READ transaction isolation level.
  
  
  
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
  
   Regards,
  
  
   2014-10-07 12:52 GMT+09:00 geetanjali mehra 
 mailtogeetanj...@gmail.com
  :
  
   It seems to me that once the read lock is acquired, only the binary
 log
   coordinates are read. Soon after binary log coordinates are read, lock
  is
   released.  Is there anything else that happens here?
  
   It means that after lock is released, dump is made while the read and
   write
   activity is going on.   This dump then, would be inconsistent.  So, to
   make
   this dump a consistent one when restoring it, binary log will be
 applied
   starting from the binary log  coordinates that has been read earlier.
  
   This is what I understand. Please correct me if my understanding is
  wrong.
  
   Best Regards,
   Geetanjali Mehra
   Senior Oracle and MySQL DBA Corporate Consultant and Database Security
   Specialist
  
  
   On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green 
 shawn.l.gr...@oracle.com
  
   wrote:
  
Hello Geetanjali,
   
On 9/23/2014 7:14 AM, geetanjali mehra wrote:
   
Can anybody please mention the internals that works when we use
   mysqldump
as follows:
   
   
 *mysqldump --single-transaction --all-databases >
    backup_sunday_1_PM.sql*
   
MySQL manual says:
   
This backup operation acquires a global read lock on all tables at
  the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as
  this
lock
has been acquired, the binary log coordinates are read and the lock
  is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
   issued,
the backup operation may stall until those statements finish. After
   that,
the dump becomes lock-free and does not disturb reads and writes on
  the
tables.
   
Can anyone explain it more? Please.
   
   
Which part would you like to address first?
   
I have a feeling it's more about how FLUSH TABLES WITH READ LOCK
 works
   but
I want to be certain before answering.
   
Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work
 Together.
Office: Blountville, TN
   
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
   
   
  
  
  
 



Re: mysqldump with single-transaction option.

2014-10-07 Thread yoku ts.
Hello,

If you use any *NOT InnoDB* storage engine, you're right:
mysqldump with --single-transaction doesn't give you any consistency, as
you say.

If all your databases and tables use InnoDB, your dumping process is
protected by the REPEATABLE-READ transaction isolation level.

http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction

Regards,


2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 It seems to me that once the read lock is acquired, only the binary log
 coordinates are read. Soon after binary log coordinates are read, lock is
 released.  Is there anything else that happens here?

 It means that after lock is released, dump is made while the read and write
 activity is going on.   This dump then, would be inconsistent.  So, to make
 this dump a consistent one when restoring it, binary log will be applied
 starting from the binary log  coordinates that has been read earlier.

 This is what I understand. Please correct me if my understanding is wrong.

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

  Hello Geetanjali,
 
  On 9/23/2014 7:14 AM, geetanjali mehra wrote:
 
  Can anybody please mention the internals that works when we use
 mysqldump
  as follows:
 
 
  *mysqldump --single-transaction --all-databases >
 backup_sunday_1_PM.sql*
 
  MySQL manual says:
 
  This backup operation acquires a global read lock on all tables at the
  beginning of the dump (using *FLUSH TABLES WITH READ LOCK
  http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
  lock
  has been acquired, the binary log coordinates are read and the lock is
  released. If long updating statements are running when the FLUSH
  http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
 issued,
  the backup operation may stall until those statements finish. After
 that,
  the dump becomes lock-free and does not disturb reads and writes on the
  tables.
 
  Can anyone explain it more? Please.
 
 
  Which part would you like to address first?
 
  I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
 but
  I want to be certain before answering.
 
  Yours,
  --
  Shawn Green
  MySQL Senior Principal Technical Support Engineer
  Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
  Office: Blountville, TN
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 



Re: mysqldump with single-transaction option.

2014-10-07 Thread geetanjali mehra
So, in the case of a *NOT InnoDB* storage engine, say MyISAM, will this
dump be of any use?

Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist


On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:

 Hello,

 If you use any *NOT InnoDB* storage engine, you're right:
 mysqldump with --single-transaction doesn't give you any consistency, as
 you say.

 If all your databases and tables use InnoDB, your dumping process is
 protected by the REPEATABLE-READ transaction isolation level.


 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction

 Regards,


 2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 It seems to me that once the read lock is acquired, only the binary log
 coordinates are read. Soon after binary log coordinates are read, lock is
 released.  Is there anything else that happens here?

 It means that after lock is released, dump is made while the read and
 write
 activity is going on.   This dump then, would be inconsistent.  So, to
 make
 this dump a consistent one when restoring it, binary log will be applied
 starting from the binary log  coordinates that has been read earlier.

 This is what I understand. Please correct me if my understanding is wrong.

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

  Hello Geetanjali,
 
  On 9/23/2014 7:14 AM, geetanjali mehra wrote:
 
  Can anybody please mention the internals that works when we use
 mysqldump
  as follows:
 
 
  *mysqldump --single-transaction --all-databases >
 backup_sunday_1_PM.sql*
 
  MySQL manual says:
 
  This backup operation acquires a global read lock on all tables at the
  beginning of the dump (using *FLUSH TABLES WITH READ LOCK
  http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
  lock
  has been acquired, the binary log coordinates are read and the lock is
  released. If long updating statements are running when the FLUSH
  http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
 issued,
  the backup operation may stall until those statements finish. After
 that,
  the dump becomes lock-free and does not disturb reads and writes on the
  tables.
 
  Can anyone explain it more? Please.
 
 
  Which part would you like to address first?
 
  I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
 but
  I want to be certain before answering.
 
  Yours,
  --
  Shawn Green
  MySQL Senior Principal Technical Support Engineer
  Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
  Office: Blountville, TN
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 





Re: mysqldump with single-transaction option.

2014-10-07 Thread yoku ts.
Maybe not, as you guessed.

 It means that after lock is released, dump is made while the read and
write
 activity is going on.   This dump then, would be inconsistent.

It's not only the binary logs: each table in your dump is based on the
time when mysqldump began to dump *that* table. For example, table1 in
your dump is as of 2014-10-07 00:00:00, the next table2 as of
2014-10-07 00:00:01, the next table3 later still, and so on.

I don't see a practical way to restore its consistency..


Regards,


2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 So, in the case of a *NOT InnoDB* storage engine, say MyISAM, will this
 dump be of any use?

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:

  Hello,
 
  If you use any *NOT InnoDB* storage engine, you're right:
  mysqldump with --single-transaction doesn't give you any consistency,
 as you say.
 
  If all your databases and tables use InnoDB, your dumping process is
  protected by the REPEATABLE-READ transaction isolation level.
 
 
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
 
  Regards,
 
 
  2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com
 :
 
  It seems to me that once the read lock is acquired, only the binary log
  coordinates are read. Soon after binary log coordinates are read, lock
 is
  released.  Is there anything else that happens here?
 
  It means that after lock is released, dump is made while the read and
  write
  activity is going on.   This dump then, would be inconsistent.  So, to
  make
  this dump a consistent one when restoring it, binary log will be applied
  starting from the binary log  coordinates that has been read earlier.
 
  This is what I understand. Please correct me if my understanding is
 wrong.
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 
  wrote:
 
   Hello Geetanjali,
  
   On 9/23/2014 7:14 AM, geetanjali mehra wrote:
  
   Can anybody please mention the internals that works when we use
  mysqldump
   as follows:
  
  
   *mysqldump --single-transaction --all-databases >
  backup_sunday_1_PM.sql*
  
   MySQL manual says:
  
   This backup operation acquires a global read lock on all tables at
 the
   beginning of the dump (using *FLUSH TABLES WITH READ LOCK
   http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as
 this
   lock
   has been acquired, the binary log coordinates are read and the lock
 is
   released. If long updating statements are running when the FLUSH
   http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
  issued,
   the backup operation may stall until those statements finish. After
  that,
   the dump becomes lock-free and does not disturb reads and writes on
 the
   tables.
  
   Can anyone explain it more? Please.
  
  
   Which part would you like to address first?
  
   I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
  but
   I want to be certain before answering.
  
   Yours,
   --
   Shawn Green
   MySQL Senior Principal Technical Support Engineer
   Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
   Office: Blountville, TN
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 
 



Re: mysqldump with single-transaction option.

2014-10-06 Thread shawn l.green

Hello Geetanjali,

On 9/23/2014 7:14 AM, geetanjali mehra wrote:

Can anybody please mention the internals that work when we use mysqldump
as follows:


*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

MySQL manual says:

This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this lock
has been acquired, the binary log coordinates are read and the lock is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
the backup operation may stall until those statements finish. After that,
the dump becomes lock-free and does not disturb reads and writes on the
tables.

Can anyone explain it more? Please.



Which part would you like to address first?

I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works 
but I want to be certain before answering.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysqldump with single-transaction option.

2014-10-06 Thread geetanjali mehra
It seems to me that once the read lock is acquired, only the binary log
coordinates are read. Soon after the binary log coordinates are read, the
lock is released.  Is there anything else that happens here?

That means that after the lock is released, the dump is made while read and
write activity is going on. This dump, then, would be inconsistent. So, to
make this dump consistent when restoring it, the binary log will be applied
starting from the binary log coordinates that were read earlier.

This is what I understand. Please correct me if my understanding is wrong.

Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist


On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
wrote:

 Hello Geetanjali,

 On 9/23/2014 7:14 AM, geetanjali mehra wrote:

 Can anybody please mention the internals that work when we use mysqldump
 as follows:


 *mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

 MySQL manual says:

 This backup operation acquires a global read lock on all tables at the
 beginning of the dump (using *FLUSH TABLES WITH READ LOCK
 http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
 lock
 has been acquired, the binary log coordinates are read and the lock is
 released. If long updating statements are running when the FLUSH
 http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
 the backup operation may stall until those statements finish. After that,
 the dump becomes lock-free and does not disturb reads and writes on the
 tables.

 Can anyone explain it more? Please.


 Which part would you like to address first?

 I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works but
 I want to be certain before answering.

 Yours,
 --
 Shawn Green
 MySQL Senior Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




mysqldump with single-transaction option.

2014-09-23 Thread geetanjali mehra
Can anybody please mention the internals that work when we use mysqldump
as follows:


*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

MySQL manual says:

This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this lock
has been acquired, the binary log coordinates are read and the lock is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
the backup operation may stall until those statements finish. After that,
the dump becomes lock-free and does not disturb reads and writes on the
tables.

Can anyone explain it more? Please.



Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist
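Since the thread is about recording binary log coordinates for a later
point-in-time replay, a hedged companion sketch (per the mysqldump manual,
--master-data=2 writes the coordinates into the dump as a comment; this
assumes binary logging is enabled on the server):

```shell
# Consistent InnoDB snapshot plus the binlog file/position it
# corresponds to, recorded as a CHANGE MASTER comment in the dump.
mysqldump --single-transaction --master-data=2 --all-databases \
    > backup_sunday_1_PM.sql
```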


How to get an S-lock in a Transaction

2014-05-28 Thread N!
Hi:
I tried to get an S-lock using the following SQL:

1. START TRANSACTION;
   SELECT * FROM test.t1 WHERE id=1;

But I found this does not take the lock, so I changed it to:

2. START TRANSACTION;
   SELECT * FROM test.t1 WHERE id=1 LOCK IN SHARE MODE;

I am wondering why the first way does not get an S lock?
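A minimal sketch of the difference, assuming test.t1 is an InnoDB table at
the default REPEATABLE READ isolation level (under which a plain SELECT is
a consistent nonlocking MVCC read, so locks must be requested explicitly):

```sql
START TRANSACTION;
SELECT * FROM test.t1 WHERE id = 1;                      -- nonlocking MVCC read
SELECT * FROM test.t1 WHERE id = 1 LOCK IN SHARE MODE;   -- takes an S lock
SELECT * FROM test.t1 WHERE id = 1 FOR UPDATE;           -- takes an X lock
COMMIT;
```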

Re: store transaction rollback information

2013-01-21 Thread Luis Motta Campos
On 26 Jul 2012, at 21:43, James Devine wrote:

 I have a large series of mysql changes(inserts/deletes/updates) taking
 place in a transaction.  After committing there may be some times where I
 need to roll those changes back later on.  Is there an easy way of
 determining what was changed in a transaction in a way I can store it and
 rollback later?


James, 

The way you describe it sounds like you have a modeling issue with your system. 

Committed transactions are not supposed to be rolled back.

Your System Architect has to arrange things in such a way that all the 
information required to decide if a change to the database can be made 
permanent is available to the application *before* COMMIT-time. Until then, 
you're supposed to hold your transaction (and all locks resulting from it) open 
and uncommitted.

In other words: once a transaction is committed, the changes are permanent. 
Rolling it back may still be possible, but it will be complicated and extremely 
expensive, computationally speaking. I strongly recommend you review your 
design choices.

I hope this helps.
Kind regards,
--
Luis Motta Campos
is a DBA, Foodie, and Photographer


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



store transaction rollback information

2012-07-26 Thread James Devine
I have a large series of MySQL changes (inserts/deletes/updates) taking
place in a transaction.  After committing, there may be some times where I
need to roll those changes back later on.  Is there an easy way of
determining what was changed in a transaction, in a way I can store it and
roll back later?

Thanks


About mysql++3.1.0 SET TRANSACTION ISOLATION LEVEL

2012-02-09 Thread 陈秋丰
Hello:
   I found a problem in the Transaction function: the switch statement
has no break statements.

Transaction::Transaction(Connection& conn, IsolationLevel level,
   IsolationScope scope, bool consistent) :
conn_(conn),
finished_(true)   // don't bother rolling it back if ctor fails
{
  // Set the transaction isolation level and scope as the user wishes
  Query q(conn_.query("SET "));
  if (scope == session) q << "SESSION ";
  if (scope == global)  q << "GLOBAL ";
  q << "TRANSACTION ISOLATION LEVEL ";
  switch (level) {
    case read_uncommitted: q << "READ UNCOMMITTED";
    case read_committed:   q << "READ COMMITTED";
    case repeatable_read:  q << "REPEATABLE READ";
    case serializable:     q << "SERIALIZABLE";
  }
Qiufeng Chen

360 Platform R&D Department

Phone: 13693389017
Fetion: 13693389017
Email: chenqiuf...@360.cn
Address: Suite 202, Tower C, Huitong Times Plaza, 71 Jianguo Road,
Chaoyang District, Beijing 100025



Re: Isolation level per transaction?

2011-03-04 Thread Perrin Harkins
On Fri, Mar 4, 2011 at 1:32 AM, Angela liu yyll2...@yahoo.com wrote:
 Can Isolation levels be set per transaction?  I know isolation levels can be 
 set per session or globally, but not sure at transaction level.

Given that you can only have one transaction at a time in a session,
there is no real difference between session-level and
transaction-level.

- Perrin

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Isolation level per transaction?

2011-03-03 Thread Angela liu
HI, Folks:

Can Isolation levels be set per transaction?  I know isolation levels can be 
set per session or globally, but not sure at transaction level.

If so, can anybody give me an example?

Thanks a lot
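A minimal sketch, going by the MySQL manual's SET TRANSACTION syntax
(without the SESSION or GLOBAL keyword, the statement applies only to the
next transaction started in the session; `t` is a placeholder table):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- next transaction only
START TRANSACTION;
SELECT * FROM t WHERE id = 1;                  -- runs under SERIALIZABLE
COMMIT;
-- Later transactions revert to the session default.
```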





  

how to find no.of transaction in mysql

2010-05-08 Thread Anand
Hi ,

Is it possible to find the number of transactions that happened on the
database during a specified period of time? It could be for the past 2-3
days.

Thanks
Anand


Re: how to find no.of transaction in mysql

2010-05-08 Thread Prabhat Kumar
Hi,
I think you can only get the number of transactions that have happened on
the database since it was started.

Using the command "show status;", the variable Questions will give the
number of transactions that have happened on the database since it was
started.

But there is a way you can use, as I do to calculate queries executed per
second:

 time=10
 orig=`mysql -e "show status" | awk '{if ($1 == "Questions") print $2}'`
 sleep $time
 last=`mysql -e "show status" | awk '{if ($1 == "Questions") print $2}'`
 diff=`expr $last - $orig`
 avg=`expr $diff / $time`
 echo $avg


Thanks,

On Sat, May 8, 2010 at 6:40 PM, Anand anand@gmail.com wrote:

 Hi ,

 Is it possible to find the no.of transactions happened on the database for
 the specified duration of time ? it could be for past 2/3 days.

 Thanks
 Anand




-- 
Best Regards,

Prabhat Kumar
MySQL DBA
Datavail-India Mumbai
Mobile : 91-9987681929
www.datavail.com

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: how to find no.of transaction in mysql

2010-05-08 Thread Prabhat Kumar
Or you can use some tools; there are tons of third-party tools available.

On Sat, May 8, 2010 at 7:15 PM, Prabhat Kumar aim.prab...@gmail.com wrote:

 Hi,
 I think you can get only no. of transactions happned on the database since
 it was started.

 By using  command show status ;  variable Questions will give value of
 no. of transactions happened on the database since database was started.

 But there is way u can use, as I used to calculate query executed per
 second.

 time=10
 orig=`mysql -e "show status" | awk '{if ($1 == "Questions") print $2}'`
 sleep $time
 last=`mysql -e "show status" | awk '{if ($1 == "Questions") print $2}'`
 diff=`expr $last - $orig`
 avg=`expr $diff / $time`
 echo $avg


 Thanks,

 On Sat, May 8, 2010 at 6:40 PM, Anand anand@gmail.com wrote:

 Hi ,

 Is it possible to find the no.of transactions happened on the database for
 the specified duration of time ? it could be for past 2/3 days.

 Thanks
 Anand




 --
 Best Regards,

 Prabhat Kumar
 MySQL DBA
 Datavail-India Mumbai
 Mobile : 91-9987681929
 www.datavail.com

 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat




-- 
Best Regards,

Prabhat Kumar
MySQL DBA
Datavail-India Mumbai
Mobile : 91-9987681929
www.datavail.com

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: mysql transaction log

2010-05-04 Thread Nitin Mehta
I think you've got it a little wrong. In MySQL, the transaction log is 
different from the binlog. Transaction logs are used only for InnoDB, while 
binlogs are optional and capture data related to all the storage engines. I 
believe you cannot read through the transaction logs; these log files (used 
in cyclic fashion) are only for the purpose of instance recovery, besides 
providing transaction support. Binlogs, on the other hand, are used mainly 
for two purposes: replication and PITR (point-in-time recovery). A small 
utility called mysqlbinlog can read through the binlogs and generate a SQL 
file, which is very handy if you need to do PITR or just want to verify any 
details.
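A hedged sketch of the PITR flow described above (the file name, position,
and cutoff time are made-up placeholders; --start-position and
--stop-datetime are standard mysqlbinlog options):

```shell
# After restoring the last full backup, replay the binlog up to just
# before the bad statement.
mysqlbinlog --start-position=4 --stop-datetime="2010-05-04 03:59:00" \
    /var/lib/mysql/mysql-bin.000042 | mysql -u root -p
```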

Hope that helps!

Regards,
Nitin



- Original Message 
From: Angelina Paul arshup...@gmail.com
To: mysql@lists.mysql.com
Sent: Tue, May 4, 2010 3:49:35 AM
Subject: mysql transaction log

Please help me to understand more about the mysql transaction log (
mysqlbinlog) file and its contents. Will it support only the innodb or all
the storage engine types like MyISAM, InnoDB?

Thanks,

Arsh Paul



  

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysql transaction log

2010-05-04 Thread Timo
InnoDB support transaction. MyISAM does not support trantsaction.

http://www.databasejournal.com/features/mysql/article.php/3382171/Transactions-in-MySQL.htm
  

2010-05-04



Best regards
Timo Seven
blog: http://zauc.wordpress.com
#请翻墙浏览,或者指定hosts到74.200.243.252###
twitter: https://twitter.com/zauc
##也请翻墙浏览
UNIX System Admin  MySQL DBA



发件人: Angelina Paul arshup...@gmail.com
发送时间: 2010-05-04 06:20
主 题: mysql transaction log
收件人: mysql@lists.mysql.com



Please help me to understand more about the mysql transaction log ( 
mysqlbinlog) file and its contents. Will it support only the innodb or all 
the storage engine types like MyISAM, InnoDB? 

Thanks, 

Arsh Paul 

Re: mysql transaction log

2010-05-04 Thread Claudio Nanni

MySQL does not have anything quite like the Oracle transaction log, but it 
has the binary log, used by replication; for this reason the binary log is 
server-level and decoupled from the storage-engine concept.
Basically (if we consider the good old statement format) it is just a file 
of the commands that were executed on the master, which are then passed to 
the slaves, if any.
If you have the binlog option enabled on your server (this is the only 
requirement for a server to work as a 'master'), you will have a set of 
log files (in a binary format, basically for compression) containing every 
statement executed on the server that changed data; this means no SELECT 
statements.
It is quite easy to see that if you have this logging enabled from the 
birth of the server, keeping all the binary logs will allow you to 
'replay' them on another server and rebuild the same situation.
To read the content of the binary logs, MySQL provides the utility 
[mysqlbinlog].

Just a splash on binary logging!

Read also the MySQL manual; it is a good exercise.

Cheers,

Claudio

On 5/4/2010 8:58 AM, Timo wrote:

InnoDB supports transactions. MyISAM does not support transactions.

http://www.databasejournal.com/features/mysql/article.php/3382171/Transactions-in-MySQL.htm

2010-05-04



Best regards
Timo Seven
blog: http://zauc.wordpress.com
(please browse via proxy, or point your hosts file at 74.200.243.252)
twitter: https://twitter.com/zauc
(also browse via proxy)
UNIX System Admin & MySQL DBA



From: Angelina Paul arshup...@gmail.com
Sent: 2010-05-04 06:20
Subject: mysql transaction log
To: mysql@lists.mysql.com



Please help me to understand more about the mysql transaction log (
mysqlbinlog) file and its contents. Will it support only the innodb or all
the storage engine types like MyISAM, InnoDB?

Thanks,

Arsh Paul





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



mysql transaction log

2010-05-03 Thread Angelina Paul
Please help me to understand more about the mysql transaction log (
mysqlbinlog) file and its contents. Will it support only the innodb or all
the storage engine types like MyISAM, InnoDB?

Thanks,

Arsh Paul


High-level transaction isolation architecture of InnoDB

2010-03-26 Thread Yang Zhang
I've noticed that Innodb seems to exhibit true serializability for the
serializable transaction isolation level. Does this mean it implements
predicate locking? Also out of curiosity, is it possible to set a
snapshot isolation transaction isolation level (is Innodb implemented
using MVCC)? Thanks in advance.
-- 
Yang Zhang
http://www.mit.edu/~y_z/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: High-level transaction isolation architecture of InnoDB

2010-03-26 Thread Harrison Fisk

Hi Yang,

On Mar 26, 2010, at 4:28 PM, Yang Zhang wrote:


I've noticed that Innodb seems to exhibit true serializability for the
serializable transaction isolation level. Does this mean it implements
predicate locking?


Kinda, but not exactly.  In SERIALIZABLE, all reads take shared locks on 
the tree as they access the rows.

It doesn't have true predicate locking, since it doesn't lock non-existent 
rows, but instead locks the gaps where rows could go.  For example, if you 
try to read the non-existent row 1000, it may also prevent 999 from being 
inserted, while with true predicate locking that would be allowed.

In addition, it locks based on access path, so there can be additional 
locks from that as well.

This does make it truly, mathematically serializable, but it takes more 
locks than would be required by 'real' predicate locking.



Also out of curiosity, is it possible to set a
snapshot isolation transaction isolation level (is Innodb implemented
using MVCC)? Thanks in advance.


Yes, it is MVCC.  InnoDB in REPEATABLE READ uses a 'snapshot' taken at the 
beginning of the transaction for all of the normal non-locking reads it 
performs during the transaction.


Regards,

Harrison
--
Harrison C. Fisk, Senior Principal Technical Support Engineer
MySQL @ Oracle, Inc., http://www.mysql.com/







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Dante Lorenso
All,

I am trying to create an atomic operation in MySQL that will manage a
queue.  I want to lock an item from a table for exclusive access by one of
my processing threads.  I do this by inserting the unique ID of the record I
want to reserve into my cli_lock table.  The following query is what I am
using to lock a record in my queue:

INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 HOUR)
FROM queue q
  LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type = 'parse'
WHERE l.object_id IS NULL
  AND q.status = 'parse'
ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
LIMIT 1

However, as I execute this query several times each minute from different
applications, I frequently get these messages:

DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when
trying to get lock; try restarting transaction

Am I writing my query wrong or expecting behavior that MySQL doesn't
support?

-- Dante


Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Michael Dykman
The query is probably fine..  that is just the lock doing its job.
Take that advice literally..  when you fail with that class of
exception, delay a millisecond or two and retry.  For a large PHP
site I designed, we had that behaviour built in: up to three attempts,
waiting 5, then 10 ms between tries.  In spite of 1M+ users/day we rarely
made it to the third attempt.
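
The retry loop described above can be sketched like this; a minimal Python illustration, where `DeadlockError` and `flaky_query` are hypothetical stand-ins for the driver's error-1213 exception and your real query call:

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver's 'error 1213: deadlock' exception."""

def with_deadlock_retry(run_query, attempts=3, delays_ms=(5, 10)):
    """Run run_query(), retrying after short waits when it deadlocks."""
    for attempt in range(attempts):
        try:
            return run_query()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # wait 5 ms, then 10 ms, before retrying
            time.sleep(delays_ms[min(attempt, len(delays_ms) - 1)] / 1000.0)

# Example: a query that deadlocks twice, then succeeds on the third try.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "ok"

result = with_deadlock_retry(flaky_query)
```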

 - michael dykman

On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com wrote:
 All,

 I am trying to create an atomic operation in MySQL that will manage a
 queue.  I want to lock an item from a table for exclusive access by one of
 my processing threads.  I do this by inserting the unique ID of the record I
 want to reserve into my cli_lock table.  The following query is what I am
 using to lock a record in my queue:

 INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
 SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 HOUR)
 FROM queue q
  LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type = 'parse'
 WHERE l.object_id IS NULL
  AND q.status = 'parse'
 ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
 LIMIT 1

 However, as I execute this query several times each minute from different
 applications, I frequently get these messages:

 DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when
 trying to get lock; try restarting transaction

 Am I writing my query wrong or expecting behavior that MySQL doesn't
 support?

 -- Dante




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Johan De Meersman
First things first: You *are* on InnoDB, which has row-level locking instead
of table-level ?

On Mon, Feb 1, 2010 at 4:08 PM, Michael Dykman mdyk...@gmail.com wrote:

 The query is probably fine..  that is just the lock doing it's job.
 Take that advice literally..  when you fail with that class of
 exception, delay a milli-second or two and retry.  For a large PHP
 site I designed, we had that behaviour built-in: up to three attempts
 waits 5, then 10 ms between trys.  In spite of 1M+ user/day we rarely
 made it to the third attempt.

  - michael dykman

 On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com wrote:
  All,
 
  I am trying to create an atomic operation in MySQL that will manage a
  queue.  I want to lock an item from a table for exclusive access by one
 of
  my processing threads.  I do this by inserting the unique ID of the
 record I
  want to reserve into my cli_lock table.  The following query is what I
 am
  using to lock a record in my queue:
 
  INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
  SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 HOUR)
  FROM queue q
   LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type =
 'parse'
  WHERE l.object_id IS NULL
   AND q.status = 'parse'
  ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
  LIMIT 1
 
  However, as I execute this query several times each minute from different
  applications, I frequently get these messages:
 
  DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found
 when
  trying to get lock; try restarting transaction
 
  Am I writing my query wrong or expecting behavior that MySQL doesn't
  support?
 
  -- Dante
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.





-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon an ass


Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread D. Dante Lorenso

Johan De Meersman wrote:
First things first: You *are* on InnoDB, which has row-level locking 
instead of table-level ?


Yes, both cli_lock and queue tables are InnoDB.  The server is 
running MySQL 5.1.36.


I find it strange that I would have so many of these deadlocks 
throughout the day when these queries run from only 3 processes every 20 
seconds.  What are the chances that 2 scripts are executing these 
queries simultaneously?  And even if that happens, why does it cause 
this deadlock error each time?


If I break the query into 2 parts ... like SELECT FOR UPDATE followed by 
the INSERT/UPDATE, would that help fix the errors?


What is this error exactly, anyhow?  Where is the deadlock ... is it on 
the select or the insert?


-- Dante




On Mon, Feb 1, 2010 at 4:08 PM, Michael Dykman mdyk...@gmail.com wrote:


The query is probably fine..  that is just the lock doing it's job.
Take that advice literally..  when you fail with that class of
exception, delay a milli-second or two and retry.  For a large PHP
site I designed, we had that behaviour built-in: up to three attempts
waits 5, then 10 ms between trys.  In spite of 1M+ user/day we rarely
made it to the third attempt.

 - michael dykman

On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com wrote:
  All,
 
  I am trying to create an atomic operation in MySQL that will manage a
  queue.  I want to lock an item from a table for exclusive
access by one of
  my processing threads.  I do this by inserting the unique ID of
the record I
  want to reserve into my cli_lock table.  The following query is
what I am
  using to lock a record in my queue:
 
  INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
  SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1
HOUR)
  FROM queue q
   LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type
= 'parse'
  WHERE l.object_id IS NULL
   AND q.status = 'parse'
  ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
  LIMIT 1
 
  However, as I execute this query several times each minute from
different
  applications, I frequently get these messages:
 
  DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock
found when
  trying to get lock; try restarting transaction
 
  Am I writing my query wrong or expecting behavior that MySQL doesn't
  support?
 
  -- Dante
 



--
--
D. Dante Lorenso
da...@lorenso.com
972-333-4139




Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread D. Dante Lorenso

Michael Dykman wrote:

The query is probably fine..  that is just the lock doing it's job.
Take that advice literally..  when you fail with that class of
exception, delay a milli-second or two and retry.  For a large PHP
site I designed, we had that behaviour built-in: up to three attempts
waits 5, then 10 ms between trys.  In spite of 1M+ user/day we rarely
made it to the third attempt.


Sounds like the answer is that's just the way MySQL is.  I don't 
usually like those kinds of answers.  I've written similar queries in 
another DB and never got these types of errors.  Perhaps there is a 
better way to create a queue system that avoids this problem entirely? 
 I feel like if MySQL is throwing this warning at me, I should be 
doing something to correct it.


I have a queue with several states in it:

state1 --- processing1 -- state2 --- processing2 --- state3

I want to find a record that is in state1 and reserve the right to 
process it.  After it is done being processed, the code will set its 
state to state2, which allows the next application to pick it up and work 
on it.  I am actually using PHP/MySQL, and this problem sounds like a job 
for a message queue.  So, in essence, my solution is a message 
queue built using MySQL tables to store and manage the queue.
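
One common shape for that claim step is a conditional UPDATE plus a rowcount check, so two workers can never reserve the same row. A minimal Python sketch using the stdlib sqlite3 module in place of MySQL (the queue table and column names are illustrative, and InnoDB's locking behaviour differs from SQLite's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (queue_id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO queue VALUES (?, ?)",
                 [(1, "parse"), (2, "parse"), (3, "done")])
conn.commit()

def claim_one(conn):
    """Claim the lowest-id 'parse' row; return its id, or None if empty.
    The UPDATE's WHERE re-checks the status, so it only succeeds if the
    row is still unclaimed when we get to it."""
    while True:
        row = conn.execute(
            "SELECT queue_id FROM queue WHERE status = 'parse' "
            "ORDER BY queue_id LIMIT 1").fetchone()
        if row is None:
            return None  # nothing left to claim
        cur = conn.execute(
            "UPDATE queue SET status = 'claimed' "
            "WHERE queue_id = ? AND status = 'parse'", (row[0],))
        conn.commit()
        if cur.rowcount == 1:
            return row[0]  # we won the race; otherwise loop and retry

first = claim_one(conn)   # claims queue_id 1
second = claim_one(conn)  # claims queue_id 2
```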


Has this problem already been solved in a way I can just leverage the 
existing solution? ... er, without the deadlock issue.


Are you saying I should just ignore the message about deadlock and let 
the app run as if the message never occurred (since there's not a 
problem with seeing that message)?


-- Dante




 - michael dykman

On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com wrote:

All,

I am trying to create an atomic operation in MySQL that will manage a
queue.  I want to lock an item from a table for exclusive access by one of
my processing threads.  I do this by inserting the unique ID of the record I
want to reserve into my cli_lock table.  The following query is what I am
using to lock a record in my queue:

INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 HOUR)
FROM queue q
 LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type = 'parse'
WHERE l.object_id IS NULL
 AND q.status = 'parse'
ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
LIMIT 1

However, as I execute this query several times each minute from different
applications, I frequently get these messages:

DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when
trying to get lock; try restarting transaction

Am I writing my query wrong or expecting behavior that MySQL doesn't
support?





Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Michael Dykman
The SELECT FOR UPDATE is supposed to lock the rows selected; an
operation in another connection attempting to read or modify those
rows waits on the lock, or gets an error, while it is still in place.
That is what SELECT FOR UPDATE is supposed to do.

If that is not the behaviour you want, then why are you using the lock?

 - michael dykman

On Mon, Feb 1, 2010 at 12:29 PM, D. Dante Lorenso da...@lorenso.com wrote:
 Michael Dykman wrote:

 The query is probably fine..  that is just the lock doing it's job.
 Take that advice literally..  when you fail with that class of
 exception, delay a milli-second or two and retry.  For a large PHP
 site I designed, we had that behaviour built-in: up to three attempts
 waits 5, then 10 ms between trys.  In spite of 1M+ user/day we rarely
 made it to the third attempt.

 Sounds like the answer is that's just the way MySQL is.  I don't usually
 like those kinds of answers.  I've written similar queries in another DB and
 never got these types of errors.  Perhaps there is a better way to create a
 queue system that avoids this problem entirely?  I feel like if MySQL is
 throwing out this wanring to me, that I should be doing to correct it.

 I have a queue with several states in it:

    state1 --- processing1 -- state2 --- processing2 --- state3

 I want to find a record that is in state1 and reserve the right to process
 it.  After it is done being processed, the code will set it's state to
 state2 which allows the next application to pick it up and work on it.  I am
 actually using PHP/MySQL and this problem sounds like a job for a message
 queue.  So, in essence, my solution is like a message queue built using
 MySQL tables to store and manage the queue.

 Has this problem already been solved in a way I can just leverage the
 existing solution? ... er, without the deadlock issue.

 Are you saying I should just ignore the message about deadlock and let the
 app run as if the message never occurred (since there's not a problem with
 seeing that message)?

 -- Dante



  - michael dykman

 On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com wrote:

 All,

 I am trying to create an atomic operation in MySQL that will manage a
 queue.  I want to lock an item from a table for exclusive access by one
 of
 my processing threads.  I do this by inserting the unique ID of the
 record I
 want to reserve into my cli_lock table.  The following query is what I
 am
 using to lock a record in my queue:

 INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
 SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 HOUR)
 FROM queue q
  LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type =
 'parse'
 WHERE l.object_id IS NULL
  AND q.status = 'parse'
 ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
 LIMIT 1

 However, as I execute this query several times each minute from different
 applications, I frequently get these messages:

 DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found
 when
 trying to get lock; try restarting transaction

 Am I writing my query wrong or expecting behavior that MySQL doesn't
 support?




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Jesper Wisborg Krogh

Try to run

SHOW ENGINE INNODB STATUS;

Near the top there will be some information on the latest deadlock.
That might help you to understand what is deadlocking. Sometimes
changing the query or changing the indexes can remove the condition
that causes the deadlock. Do you have triggers on any of your tables?
If so, that's another place to watch, as the deadlock will be reported
against the original query even if a trigger is causing it.
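
Once you have that output, pulling out just the deadlock section can be scripted. A small Python sketch; the status text here is a trimmed, hypothetical sample of the real output:

```python
import re

def latest_deadlock(status_text):
    """Return the body of the LATEST DETECTED DEADLOCK section, or None."""
    m = re.search(r"LATEST DETECTED DEADLOCK\n-{4,}\n(.*?)\n-{4,}\n",
                  status_text, re.S)
    return m.group(1).strip() if m else None

# Trimmed, hypothetical sample of SHOW ENGINE INNODB STATUS output:
sample = """\
------------------------
LATEST DETECTED DEADLOCK
------------------------
100201 12:00:00
*** (1) TRANSACTION:
TRANSACTION 0 123, ACTIVE 0 sec inserting
------------
TRANSACTIONS
------------
Trx id counter 0 456
"""

section = latest_deadlock(sample)
```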


Jesper

On 02/02/2010, at 6:06 AM, Michael Dykman wrote:


The SELECT FOR UPDATE is supposed to lock those rows selected..   an
operation in another connection attempting to read or modify those
rows gets an error on the lock if it is still in place.  That is that
SELECT FOR UPDATE is supposed to do.

If that is not the behaviour you want, then why are you using the  
lock?


 - michael dykman

On Mon, Feb 1, 2010 at 12:29 PM, D. Dante Lorenso  
da...@lorenso.com wrote:

Michael Dykman wrote:


The query is probably fine..  that is just the lock doing it's job.
Take that advice literally..  when you fail with that class of
exception, delay a milli-second or two and retry.  For a large PHP
site I designed, we had that behaviour built-in: up to three  
attempts
waits 5, then 10 ms between trys.  In spite of 1M+ user/day we  
rarely

made it to the third attempt.


Sounds like the answer is that's just the way MySQL is.  I don't  
usually
like those kinds of answers.  I've written similar queries in  
another DB and
never got these types of errors.  Perhaps there is a better way to  
create a
queue system that avoids this problem entirely?  I feel like if  
MySQL is
throwing out this wanring to me, that I should be doing to  
correct it.


I have a queue with several states in it:

   state1 --- processing1 -- state2 --- processing2 --- state3

I want to find a record that is in state1 and reserve the right to  
process
it.  After it is done being processed, the code will set it's  
state to
state2 which allows the next application to pick it up and work on  
it.  I am
actually using PHP/MySQL and this problem sounds like a job for a  
message
queue.  So, in essence, my solution is like a message queue built  
using

MySQL tables to store and manage the queue.

Has this problem already been solved in a way I can just leverage the
existing solution? ... er, without the deadlock issue.

Are you saying I should just ignore the message about deadlock and  
let the
app run as if the message never occurred (since there's not a  
problem with

seeing that message)?

-- Dante




 - michael dykman

On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com  
wrote:


All,

I am trying to create an atomic operation in MySQL that will  
manage a
queue.  I want to lock an item from a table for exclusive  
access by one

of
my processing threads.  I do this by inserting the unique ID of the
record I
want to reserve into my cli_lock table.  The following query  
is what I

am
using to lock a record in my queue:

INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1  
HOUR)

FROM queue q
 LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type =
'parse'
WHERE l.object_id IS NULL
 AND q.status = 'parse'
ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
LIMIT 1

However, as I execute this query several times each minute from  
different

applications, I frequently get these messages:

DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock  
found

when
trying to get lock; try restarting transaction

Am I writing my query wrong or expecting behavior that MySQL  
doesn't

support?






--
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.









Re: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

2010-02-01 Thread Madonna DeVaudreuil
May I suggest this link?  I found it useful.  I haven't looked but there 
may be more recent posts with additional information.

http://www.xaprb.com/blog/2007/09/18/how-to-debug-innodb-lock-waits/

Donna 
 



From:
Jesper Wisborg Krogh jes...@noggin.com.au
To:
mysql@lists.mysql.com
Date:
02/01/2010 03:43 PM
Subject:
Re: Serialization failure: 1213 Deadlock found when trying to get  lock; 
try restarting transaction



Try to run

SHOW ENGINE INNODB STATUS;

Near the top there will be some information on the latest deadlock. 
That might help you to understand what is deadlocking. Sometimes 
changing the query or changing the indexes can remove the condition 
that causes the deadlock. I don't know whether you have triggers on 
any of your tables? If so that's one place to watch for as well as 
the deadlock will show up as it is on the original query even if it 
is a trigger causing it.

Jesper

On 02/02/2010, at 6:06 AM, Michael Dykman wrote:

 The SELECT FOR UPDATE is supposed to lock those rows selected..   an
 operation in another connection attempting to read or modify those
 rows gets an error on the lock if it is still in place.  That is that
 SELECT FOR UPDATE is supposed to do.

 If that is not the behaviour you want, then why are you using the 
 lock?

  - michael dykman

 On Mon, Feb 1, 2010 at 12:29 PM, D. Dante Lorenso 
 da...@lorenso.com wrote:
 Michael Dykman wrote:

 The query is probably fine..  that is just the lock doing it's job.
 Take that advice literally..  when you fail with that class of
 exception, delay a milli-second or two and retry.  For a large PHP
 site I designed, we had that behaviour built-in: up to three 
 attempts
 waits 5, then 10 ms between trys.  In spite of 1M+ user/day we 
 rarely
 made it to the third attempt.

 Sounds like the answer is that's just the way MySQL is.  I don't 
 usually
 like those kinds of answers.  I've written similar queries in 
 another DB and
 never got these types of errors.  Perhaps there is a better way to 
 create a
 queue system that avoids this problem entirely?  I feel like if 
 MySQL is
 throwing out this wanring to me, that I should be doing to 
 correct it.

 I have a queue with several states in it:

state1 --- processing1 -- state2 --- processing2 --- state3

 I want to find a record that is in state1 and reserve the right to 
 process
 it.  After it is done being processed, the code will set it's 
 state to
 state2 which allows the next application to pick it up and work on 
 it.  I am
 actually using PHP/MySQL and this problem sounds like a job for a 
 message
 queue.  So, in essence, my solution is like a message queue built 
 using
 MySQL tables to store and manage the queue.

 Has this problem already been solved in a way I can just leverage the
 existing solution? ... er, without the deadlock issue.

 Are you saying I should just ignore the message about deadlock and 
 let the
 app run as if the message never occurred (since there's not a 
 problem with
 seeing that message)?

 -- Dante



  - michael dykman

 On Mon, Feb 1, 2010 at 9:33 AM, Dante Lorenso da...@lorenso.com 
 wrote:

 All,

 I am trying to create an atomic operation in MySQL that will 
 manage a
 queue.  I want to lock an item from a table for exclusive 
 access by one
 of
 my processing threads.  I do this by inserting the unique ID of the
 record I
 want to reserve into my cli_lock table.  The following query 
 is what I
 am
 using to lock a record in my queue:

 INSERT INTO cli_lock (lock_id, object_id, lock_type, expire_dt)
 SELECT $LOCK_ID, q.queue_id, 'parse', DATE_ADD(NOW(), INTERVAL 1 
 HOUR)
 FROM queue q
  LEFT JOIN cli_lock l ON l.object_id = q.queue_id AND l.lock_type =
 'parse'
 WHERE l.object_id IS NULL
  AND q.status = 'parse'
 ORDER BY q.file_size ASC, q.created ASC, q.queue_id ASC
 LIMIT 1

 However, as I execute this query several times each minute from 
 different
 applications, I frequently get these messages:

 DB ERROR: SQLSTATE[40001]: Serialization failure: 1213 Deadlock 
 found
 when
 trying to get lock; try restarting transaction

 Am I writing my query wrong or expecting behavior that MySQL 
 doesn't
 support?




 -- 
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.











Can't execute the given command because you have active locked tables or an active transaction

2009-12-30 Thread Manasi Save
Hi All,

I am getting a Can't execute the given command because you have active
locked tables or an active transaction error when I am trying to truncate
a table. I am unable to understand the error: when I use the MySQL Query
Browser the same command works fine, but when I run it through NetBeans I
get this error. That is what is very strange, and I am unable to solve
this problem. If anyone could just help me with this.

Thanks in advance.
--
Regards,
 Manasi Save 



Re: Can't execute the given command because you have active locked tables or an active transaction

2009-12-30 Thread Miguel Angel Nieto
Hi,

 I am getting Can't execute the given command because you have active locked
 tables or an active transaction error when I am trying to truncate table. I
 am unable to understand the error as when I am using the mysql query browser
 then the same command is working fine but when I am doing it through
 NetBeans then I am getting this error. Well that is what is very strange and
 I am unable to solve this problem.

Try enabling the MySQL query log:

log=/var/log/somelog.log

then execute the command from NetBeans. Look at the log, then execute it
again from Query Browser. Now check whether there is any difference in the
log (perhaps NetBeans executes some other queries first) ;)

-- 
What I'd do is pretend to be a deaf-mute, and that way I wouldn't have
to talk. If they wanted to tell me something, they'd have to write it
on a piece of paper and show it to me. In the end they'd get fed up and
I'd never have to talk again for the rest of my life.




Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Michael Dykman
That is correct.  Many db interfaces offer programmatic abstractions of
these facilities, but you may certainly just issue the statements.

START TRANSACTION

INSERT that
UPDATE that

on success: COMMIT

on error: ROLLBACK
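
In application code, the same skeleton looks like this; a minimal Python sketch using the stdlib sqlite3 module in place of a MySQL connection (the account table is illustrative):

```python
import sqlite3

# isolation_level=None: no implicit transactions; we issue BEGIN/COMMIT ourselves
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 0)")

def transfer(conn, src, dst, amount):
    """Apply both UPDATEs atomically: COMMIT on success, ROLLBACK on error."""
    conn.execute("BEGIN")              # START TRANSACTION
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.execute("COMMIT")         # both updates become visible together
    except sqlite3.Error:
        conn.execute("ROLLBACK")       # neither update is kept
        raise

transfer(conn, 1, 2, 40)
balances = dict(conn.execute("SELECT id, balance FROM account"))
```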

 - michael dykman



On Wed, Oct 28, 2009 at 12:07 AM, Mosaed AlZamil mosza...@gmail.com wrote:
 Hello Everyone,
  I am a newbie using innodb.
 How can I implement  START TRANSACTION COMMIT ROLLBACK when I need to update
 two tables
 that are located in two different databases. Would a single  START
 TRANSACTION be sufficient ?
 Any help would be appreciated.
 TIA
 Mos




-- 
 - michael dykman
 - mdyk...@gmail.com

May you live every day of your life.
Jonathan Swift




Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Martijn Tonies

Michael,

Does MySQL support multi-db transactions?

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com



That is correct.  Many db interfaces offer programmatic abstractions of
these facilities, but you may certainly just issue the statements.

START TRANSACTION

INSERT that
UPDATE that

on success: COMMIT

on error: ROLLBACK

- michael dykman



On Wed, Oct 28, 2009 at 12:07 AM, Mosaed AlZamil mosza...@gmail.com wrote:

Hello Everyone,
I am a newbie using innodb.
How can I implement START TRANSACTION COMMIT ROLLBACK when I need to 
update

two tables
that are located in two different databases. Would a single START
TRANSACTION be sufficient ?
Any help would be appreciated.
TIA
Mos





--
- michael dykman
- mdyk...@gmail.com

May you live every day of your life.
   Jonathan Swift






Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Ye Yuan
Looks to me like we should use the XA transaction syntax instead. Check this:

http://dev.mysql.com/doc/refman/5.0/en/xa.html

Thanks,
YY


2009/10/28 Martijn Tonies m.ton...@upscene.com

 Michael,

 Does MySQL support multi-db transactions?

 With regards,

 Martijn Tonies
 Upscene Productions
 http://www.upscene.com

 Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
 Anywhere, MySQL, InterBase, NexusDB and Firebird!

 Database questions? Check the forum:
 http://www.databasedevelopmentforum.com




 That is correct.  Many db interfaces off programmatic abstractions of
 these facilities, but you may certainly just issue the statments.

 START TRANSACTION

 INSERT that
 UPDATE that

 on success: COMMIT

 on error: ROLLBACK

 - michael dykman



 On Wed, Oct 28, 2009 at 12:07 AM, Mosaed AlZamil mosza...@gmail.com
 wrote:

 Hello Everyone,
 I am a newbie using innodb.
 How can I implement START TRANSACTION COMMIT ROLLBACK when I need to
 update
 two tables
 that are located in two different databases. Would a single START
 TRANSACTION be sufficient ?
 Any help would be appreciated.
 TIA
 Mos




 --
 - michael dykman
 - mdyk...@gmail.com

 May you live every day of your life.
   Jonathan Swift








Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Martijn Tonies
Ah, works for InnoDB I see.

Nice.


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com


  Looks to me we should use XA transaction syntax instead. Check this:


  http://dev.mysql.com/doc/refman/5.0/en/xa.html

  Thanks,
  YY



  2009/10/28 Martijn Tonies m.ton...@upscene.com

Michael,

Does MySQL support multi-db transactions?

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com




That is correct.  Many db interfaces off programmatic abstractions of
these facilities, but you may certainly just issue the statments.

START TRANSACTION

INSERT that
UPDATE that

on success: COMMIT

on error: ROLLBACK

- michael dykman



On Wed, Oct 28, 2009 at 12:07 AM, Mosaed AlZamil mosza...@gmail.com wrote:

  Hello Everyone,
  I am a newbie using innodb.
  How can I implement START TRANSACTION COMMIT ROLLBACK when I need to 
update
  two tables
  that are located in two different databases. Would a single START
  TRANSACTION be sufficient ?
  Any help would be appreciated.
  TIA
  Mos





-- 
- michael dykman
- mdyk...@gmail.com

May you live every day of your life.
  Jonathan Swift









Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Paul DuBois
If all the tables are InnoDB, XA isn't needed. It doesn't matter  
whether all tables are in the same database.


On Oct 28, 2009, at 5:48 AM, Martijn Tonies wrote:


Ah, works for InnoDB I see.

Nice.


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com


 Looks to me we should use XA transaction syntax instead. Check this:


 http://dev.mysql.com/doc/refman/5.0/en/xa.html

 Thanks,
 YY



 2009/10/28 Martijn Tonies m.ton...@upscene.com

   Michael,

   Does MySQL support multi-db transactions?

   With regards,

   Martijn Tonies
   Upscene Productions
   http://www.upscene.com

   Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
   Anywhere, MySQL, InterBase, NexusDB and Firebird!

   Database questions? Check the forum:
   http://www.databasedevelopmentforum.com




   That is correct.  Many db interfaces off programmatic  
abstractions of

   these facilities, but you may certainly just issue the statments.

   START TRANSACTION

   INSERT that
   UPDATE that

   on success: COMMIT

   on error: ROLLBACK

   - michael dykman



   On Wed, Oct 28, 2009 at 12:07 AM, Mosaed AlZamil mosza...@gmail.com 
 wrote:


 Hello Everyone,
 I am a newbie using innodb.
 How can I implement START TRANSACTION COMMIT ROLLBACK when I  
need to update

 two tables
 that are located in two different databases. Would a single START
 TRANSACTION be sufficient ?
 Any help would be appreciated.
 TIA
 Mos





   --
   - michael dykman
   - mdyk...@gmail.com

   May you live every day of your life.
 Jonathan Swift


--
Paul DuBois
Sun Microsystems / MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com





Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Michael Dykman
There seems to be some confusion about 'multi-db'. Within a single
MySQL instance, assuming that all your tables are a transactional type
(InnoDB isn't the only one), you don't have to do anything special to
cross database boundaries.  XA is required if you plan to spread your
transactions out across multiple database instances.

 - michael
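
Michael's point can be sketched concretely. Assuming both schemas live on the
same MySQL instance and both tables use InnoDB (schema and table names below
are hypothetical), one ordinary transaction spans them:

```sql
-- Two schemas, one server, both tables InnoDB (names are hypothetical).
START TRANSACTION;
UPDATE db1.accounts  SET balance = balance - 100 WHERE id = 42;
INSERT INTO db2.audit_log (account_id, delta) VALUES (42, -100);
COMMIT;   -- or ROLLBACK; both statements succeed or fail together
```

Only when the databases sit on separate server instances does the XA syntax
(XA START / XA PREPARE / XA COMMIT) become necessary.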

On Wed, Oct 28, 2009 at 10:08 AM, Paul DuBois paul.dub...@sun.com wrote:
 If all the tables are InnoDB, XA isn't needed. It doesn't matter whether all
 tables are in the same database.






-- 
 - michael dykman
 - mdyk...@gmail.com

May you live every day of your life.
Jonathan Swift




Re: START TRANSACTION COMMIT ROLLBACK

2009-10-28 Thread Martijn Tonies

There seems to be some confusion about 'multi-db'. Within a single
MySQL instance, assuming that all your tables are a transactional type
(InnoDB isn't the only one), you don't have to do anything special to
cross database boundaries.  XA is required if you plan to spread your
transactions out across multiple database instances.


Right, makes sense, thanks for clearing that up.

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com





START TRANSACTION COMMIT ROLLBACK

2009-10-27 Thread Mosaed AlZamil
Hello Everyone,
I am a newbie using InnoDB.
How can I implement START TRANSACTION / COMMIT / ROLLBACK when I need to update
two tables that are located in two different databases? Would a single START
TRANSACTION be sufficient?
Any help would be appreciated.
TIA
Mos


Re: lock wait timeout try restarting transaction

2009-08-30 Thread Shawn Green

Hello Manasi,

Manasi Save wrote:

Hi All,

Can anyone tell me in what scenarios one can get this error? I have InnoDB
tables, and while updating one table I get the error "lock wait timeout; try
restarting transaction". Also, the parameter innodb_lock_wait_timeout is set
to its default of 50. What will be the effect of increasing this parameter?



InnoDB locks entire tables only on very rare occasions. What you are 
running into is a scenario where one transaction is using the row you 
want to change in another transaction and the first transaction fails to 
complete itself (commit or rollback) within the lock wait timeout limit.


For more information on InnoDB locking, please read through:
http://dev.mysql.com/doc/refman/5.0/en/innodb-transaction-model.html

My suggestion is to use shorter-lived transactions or to raise your 
timeout value to something high enough to allow your current 
transactions to complete.


Yours,
--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN
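
For reference, the timeout Shawn refers to can be checked and raised per
session (the value below is illustrative):

```sql
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';  -- default is 50 seconds
SET SESSION innodb_lock_wait_timeout = 120;      -- affects only this connection
```

Raising it only hides slow transactions for longer; shortening the blocking
transaction is usually the real fix.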






lock wait timeout try restarting transaction

2009-08-28 Thread Manasi Save
Hi All,

Can anyone tell me in what scenarios one can get this error? I have InnoDB
tables, and while updating one table I get the error "lock wait timeout; try
restarting transaction". Also, the parameter innodb_lock_wait_timeout is set
to its default of 50. What will be the effect of increasing this parameter?

-
Thanks and Regards,
Manasi Save






developing a transaction safe interface with MySQL as database

2009-05-28 Thread Gábor Lénárt
Hi All,

I am new here (my name is Gábor Lénárt and I am from Hungary), and I am
also quite new to developing more complex applications with an RDBMS,
MySQL in our case. I hope this is the right place to ask general
questions too. I have experience creating simple applications without
transaction handling, and with administering a MySQL server at least.

Now I have to develop a SOAP interface for querying and modifying data
in a MySQL database. The interface may be used by multiple clients in
parallel, and it must handle that safely, so transactions are a must
here. However, I am not well experienced in this area. How can I be sure
that starting a transaction guarantees a consistent snapshot of my
database (even if other requests modify it meanwhile) across mixed
SELECT/UPDATE/INSERT statements, while still being able to read back the
changes made within my own transaction? The MySQL documentation is a bit
mystical for me, since it refers to Oracle, with which I have no
experience at all. I guessed this would work:

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET AUTOCOMMIT = 0;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
[... my queries with lots of SELECT/INSERT/UPDATE statements ...]
COMMIT; (or ROLLBACK if there was some error during the processing)

Is this a good solution for me? I am not so sure about isolation levels.
The other problem is error handling. What error codes should I expect
when there is a conflict between parallel transactions? The SQL error
handler must be prepared to distinguish ordinary SQL problems (which
shouldn't occur, but who can be sure there is no bug in the
application!) from the error codes caused by conflicting transactions,
deadlocks, and so on. Or is it enough to check the error code after
issuing COMMIT, so that all errors there concern only conflicts, and I
can be sure that errors from other SQL statements are caused by bugs in
my application?

My second problem may be even more complex (for me, at least). A SOAP
request to modify the database basically describes multiple
modifications carried out over multiple steps. I also have constraints
and rules to check, but they cannot be enforced at the SQL level: these
checks are quite complex, not expressible as simple rules like UNIQUE
fields, and furthermore the constraints and relation rules of my
interface may be broken temporarily during the transaction steps; they
only need to hold at the end, just before the COMMIT. I have already
implemented this, but I have a worrisome thought. Since my transaction
sees a consistent snapshot of the DB as of the moment of issuing START
TRANSACTION, I will not see whether other threads of my interface
received other SOAP requests that break the constraints I want to check
within my transaction. Maybe my English is a bit terrible and my
problems are rather beginner-level, but I would be grateful if someone
could help me with the answers.

Thanks a lot in advance,
-- 
Gábor
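
On the error-handling question, the two error codes that conflicting InnoDB
transactions typically raise are 1205 (lock wait timeout) and 1213
(deadlock), and they can surface on any statement in the transaction, not
only at COMMIT. A minimal sketch of catching them in a stored program
(procedure and table names are hypothetical):

```sql
-- Sketch: roll back on deadlock (1213) or lock wait timeout (1205);
-- the caller is then expected to retry the whole transaction.
DELIMITER //
CREATE PROCEDURE do_work()
BEGIN
  DECLARE EXIT HANDLER FOR 1213, 1205
    ROLLBACK;
  START TRANSACTION;
  UPDATE items SET qty = qty - 1 WHERE id = 1;
  COMMIT;
END //
DELIMITER ;
```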




RE: Understanding Transaction Deadlocks with Innodb

2009-05-26 Thread Michael Caplan
Hi Martin,

Sorry my example wasn't clearer.  I am doing a commit or rollback
depending on the success of the overall transaction.  What I don't do is
retry parts of the transaction upon deadlock.

Thanks for pointing that out, though!

Best,

Mike



On Mon, 2009-05-25 at 16:46 -0400, Martin Gainty wrote:
 Mike-
 
 MySQL should ALWAYS perform a commit or rollback e.g.
 exec()
 commit() or rollback()
 
  The reason for this is from Page 419 of the
  MySQL 5.0 Certification Study Guide bullet point #3:
  During the course of a transaction, InnoDB may acquire row locks
  AS IT DISCOVERS THEM TO BE NECESSARY.
 
 and yes Deadlocks can be very tricky to resolve
 Martin Gainty 
 
 
 
 
 

Understanding Transaction Deadlocks with Innodb

2009-05-25 Thread Michael Caplan
Hi there,

I am trying to sort through an occasional problem I am having with
deadlocks involving a series of InnoDB tables:

cases (PK id)
 |___ cases_workcodes (PK id, case_id / FK case_id)
 |___ cases_invoices (PK id, case_id / FK case_id)
 |___ cases_additional (PK id, case_id / FK case_id)
 |___ cases_alloys (PK id, case_id / FK case_id)
 |___ cases_enclosures (PK id, case_id / FK case_id)


The cases table has a one-to-many relationship with the noted child
tables, maintained by fully cascading foreign keys.

When inserting or changing data in the cases and related tables I have
code that does something like this:

 1. Create transaction
 2. REPLACE data in a single case as identified by a primary key 
  * The reason for using a REPLACE statement is that I want
it to INSERT, or DELETE and then INSERT, the case data.  With
the cascades on DELETE, case sub-table data gets cleaned
up for me automatically.
 3. Loop through each case sub types and INSERT each
 4. Close transaction


My problem is that once in a while (almost daily) I get the following
error:


SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get 
lock; try restarting transaction


The interesting thing is that I never get the deadlock on the initial
REPLACE statement on the case.  The deadlock is always reported on the
INSERT statements on the sub tables.  

I know I have no other processes that are locking data in the sub tables
for INSERT or UPDATE beyond the scope of that single identified case
record.  However, I am not sure the scope of the transaction lock.

I also know that I need to recode to catch the deadlock and try again,
however I'm not sure why this would be happening in the first place.
Part of me believes if I understood the circumstances of the deadlock, I
might be able to solve this issue (in part or in whole) at the DB level.

Any tips?  I've included below data from the Innodb status output.

Thanks,

Mike




LATEST DETECTED DEADLOCK

090522 19:51:54
*** (1) TRANSACTION:
TRANSACTION 0 3102355, ACTIVE 0 sec, process no 10134, OS thread id 119138 
inserting
mysql tables in use 1, locked 1
LOCK WAIT 33 lock struct(s), heap size 6752, 22 row lock(s), undo log entries 9
MySQL thread id 141330, query id 3658119 x.x.x.x ddx update
INSERT INTO
   ddx800020.cases_invoices
   (id, case_id, statement_date, practice_id, invoice_date, 
taxes, total, line_items, note, payment, payment_id)
   VALUES
   ('263012', '310372', NULL, '221', '2009-05-22', '0.00', 
'183.75', 'WORKCODES', NULL, '0', NULL)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 6587 page no 54 n bits 1256 index `IDX_cases_invoices_1` 
of table `ddx800020`.`cases_invoices` trx id 0 3102355 lock_mode X locks gap 
before rec insert intention waiting
Record lock, heap no 983 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 4; hex 0004bcbe; asc ;; 1: len 4; hex 000401ee; asc ;;

*** (2) TRANSACTION:
TRANSACTION 0 3102341, ACTIVE 0 sec, process no 10134, OS thread id 1192675648 
inserting, thread declared inside InnoDB 1
mysql tables in use 1, locked 1
41 lock struct(s), heap size 6752, 38 row lock(s), undo log entries 18
MySQL thread id 141328, query id 3658088 x.x.x.x ddx update
INSERT INTO
   ddx800020.cases_macros
   (case_id, macro_id, dental_code, units, description, 
teeth)
   VALUES
   ('310332', 'P', '', '1', 'PFZ - PORCELAIN FUSED TO 
ZIRCONIA', '26')
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 6587 page no 54 n bits 1256 index `IDX_cases_invoices_1` 
of table `ddx800020`.`cases_invoices` trx id 0 3102341 lock mode S locks gap 
before rec
Record lock, heap no 983 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 4; hex 0004bcbe; asc ;; 1: len 4; hex 000401ee; asc ;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 2056 page no 11 n bits 720 index `IDX_cases_macros_1` of 
table `ddx800020`.`cases_macros` trx id 0 3102341 lock_mode X locks gap before 
rec insert intention waiting
Record lock, heap no 436 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 4; hex 0004bc6e; ascn;; 1: len 6; hex 000cc8cf; asc   ;;

*** WE ROLL BACK TRANSACTION (1)
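
The report above is the LATEST DETECTED DEADLOCK section of the InnoDB
monitor; after the next occurrence it can be re-captured with:

```sql
SHOW ENGINE INNODB STATUS\G  -- \G formats the long output vertically
```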





MySQL 5.1 Rolling back a transaction containing Create Alter Drop table statements

2009-03-16 Thread Hatem Nassrat
Hi,

Is there any way to rollback a transaction in MySQL 5.1 that contains
Create, Alter, Drop, table statements.

i.e. is there any way to turn off the feature:

    Statements That Cause an Implicit Commit

or even remove some of the default statements that cause a commit.

I tried using the TEMPORARY keyword, but it is not exactly what I want.
Moreover, the ALTER syntax does not accept a TEMPORARY keyword, and
ALTERing a temporary table causes an implicit commit.


Any thoughts on this issue,

-- 
Hatem Nassrat
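
To make the limitation concrete, a minimal sketch (table names hypothetical):
every DDL statement commits whatever transaction is open at that point, so
the final ROLLBACK has nothing left to undo.

```sql
START TRANSACTION;
INSERT INTO t1 VALUES (1);
CREATE TABLE t2 (id INT);  -- implicit COMMIT fires here; the INSERT is now permanent
ROLLBACK;                  -- no-op: the transaction already ended
```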




Re: MySQL 5.1 Rolling back a transaction containing Create Alter Drop table statements

2009-03-16 Thread Baron Schwartz
On Mon, Mar 16, 2009 at 10:25 AM, Hatem Nassrat hatem.nass...@gmail.com wrote:
 Hi,

 Is there any way to rollback a transaction in MySQL 5.1 that contains
 Create, Alter, Drop, table statements.

No.  Sorry.




Impact of making a stored program transactional while running under transaction manager?

2009-03-09 Thread David Karr
It's supposedly good advice in a multi-step stored procedure to wrap the
work in an explicit START TRANSACTION and COMMIT.  What is the impact of
doing this if the stored procedure is called from code managed by a
transaction manager, for instance in a JEE app server?


Re: Impact of making a stored program transactional while running under transaction manager?

2009-03-09 Thread Al

David Karr wrote:

It's supposedly good advice in a multi-step stored procedure to wrap the
work in an explicit START TRANSACTION and COMMIT.  What is the impact of
doing this if the stored procedure is called from code managed by a
transaction manager, for instance in a JEE app server?

  

Impact should be NONE.
If AUTOCOMMIT is already OFF, then the BEGIN TRANSACTION should have zero
effect. As for the COMMIT, you need to be darn sure what the state of
things is vis-a-vis your transaction manager, as it could be setting
AUTOCOMMIT off and the COMMIT will turn it back ON!


I think you are, at some point, likely to shoot yourself in the foot
mixing two different transaction control mechanisms.


For example: which one handles ROLLBACKs?




Re: view doesn't refresh inside transaction

2009-03-05 Thread Baron Schwartz
Ingo,

On Wed, Mar 4, 2009 at 8:49 AM, Ingo Weiss i...@metaversum.com wrote:
 Hi all,

 I have a view that is joining two base tables. I can update through
 the view, but insert only through the base tables. Now I am having the
 problem that seems to boil down to the following: When I insert into
 the base tables inside a transaction, the view doesn't seem to update.
 Only after the transaction is committed does the row appear in the
 view. Now I would like to avoid having to commit the transaction at
 that point. Is there any way to force a view to refresh inside a
 transaction?

Are you accessing the view and doing the inserts in separate transactions?

-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
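
For what it's worth, within a single connection a MySQL view is just a stored
query evaluated in the current transaction, so it should already see the
uncommitted rows. A hedged sketch (view and table names hypothetical):

```sql
START TRANSACTION;
INSERT INTO base_a (id, name)  VALUES (1, 'x');
INSERT INTO base_b (id, extra) VALUES (1, 'y');
SELECT * FROM v_joined WHERE id = 1;  -- row is visible here, before COMMIT
COMMIT;
```

If the SELECT runs on a second connection under REPEATABLE READ, the row
stays invisible there until the writer commits and the reader starts a fresh
transaction.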




view doesn't refresh inside transaction

2009-03-04 Thread Ingo Weiss

Hi all,

I have a view that is joining two base tables. I can update through
the view, but insert only through the base tables. Now I am having the
problem that seems to boil down to the following: When I insert into
the base tables inside a transaction, the view doesn't seem to update.
Only after the transaction is committed does the row appear in the
view. Now I would like to avoid having to commit the transaction at
that point. Is there any way to force a view to refresh inside a
transaction?

Thanks for any hint!
Ingo

Metaversum GmbH
Rungestr. 20 - 10179 Berlin - Germany
Geschaeftsfuehrer: Jochen Hummel, Dr. Mirko Caspar
Amtsgericht Berlin Charlottenburg HRB 99412 B




Re: Locking certain rows in a transaction

2008-08-11 Thread John Smith
Perrin Harkins wrote:
  Assuming you're using InnoDB tables, SELECT...FOR UPDATE will lock
 the rows as you describe.  It can prevent other inserts and updates to
 neighboring rows as well, depending on what isolation level you're
 running (default is REPEATABLE READ).

Thanks, in fact it does even more than I expected. When another client
tries to read a row previously selected FOR UPDATE by someone else, the
plain read works just fine. If it tries to select the row with FOR UPDATE,
it has to wait.





Locking certain rows in a transaction

2008-08-09 Thread John Smith
Hi,

I'm currently experimenting with nested sets. To insert a new node, I
need 1 SELECT, 2 UPDATE and 1 INSERT statement.
Of course all of this will be packed into a transaction, because the table
could get corrupted if not all of the mentioned queries are executed.

Now here's the question: I need to lock certain rows so that no other
client can read or write those rows (I want those clients to wait until the
transaction is committed or rolled back). I don't want to lock the entire
table, since that would block too many clients that never intended to
touch the critical rows.

Please don't simply post a link to the documentation. I tried to read it,
but I didn't find the solution.

Example would be nice.

Greets, peter





Re: Locking certain rows in a transaction

2008-08-09 Thread Perrin Harkins
On Sat, Aug 9, 2008 at 8:10 AM, John Smith [EMAIL PROTECTED] wrote:
 Now here's the question: I need to lock certain rows so that no other
 client can read or write those rows (I want those clients to wait until the
 transaction is committed or rolled back). I don't want to lock the entire
 table, since that would block too many clients that never intended to
 touch the critical rows.

Assuming you're using InnoDB tables, SELECT...FOR UPDATE will lock
the rows as you describe.  It can prevent other inserts and updates to
neighboring rows as well, depending on what isolation level you're
running (default is REPEATABLE READ).

- Perrin
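
Applied to the nested-sets insert, the pattern looks roughly like this
(table and column names hypothetical):

```sql
START TRANSACTION;
-- Lock the boundary row; competing FOR UPDATE readers and writers now wait.
SELECT rgt FROM tree WHERE id = 42 FOR UPDATE;
UPDATE tree SET rgt = rgt + 2 WHERE rgt >= 10;
UPDATE tree SET lft = lft + 2 WHERE lft > 10;
INSERT INTO tree (lft, rgt, name) VALUES (10, 11, 'new node');
COMMIT;
```

Plain SELECTs from other clients are consistent reads and are not blocked;
only locking reads and writes wait.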




innodb transaction not working

2008-03-21 Thread Saravanan
Hi lists,

We are running a database with mixed MyISAM and InnoDB tables. I know that
InnoDB supports transactions. My server is running with the default

transaction-isolation=REPEATABLE READ

Whenever our cron runs its stats update scripts, it locks the whole table and
makes other SQL statements that update the table wait for a long time. How
can I set up isolation correctly so that other statements proceed without
waiting? My script updates record by record, not in a batch.

Should I use transactional statements like START TRANSACTION and COMMIT to
achieve this?

thanks in advance.

Saravanan
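
One hedged option for the cron job: convert the table to InnoDB if it is not
already (a MyISAM table is locked as a whole for every update, regardless of
transaction settings), then run the script in READ COMMITTED with one small
transaction per record so row locks are released quickly (table name
hypothetical):

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
UPDATE stats SET hits = hits + 1 WHERE id = 1;
COMMIT;  -- committing per record frees its row locks immediately
```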




  





RE: innodb transaction not working

2008-03-21 Thread Rajesh Mehrotra
Hi Saravanan,

Please check http://forums.mysql.com/read.php?97,18003,18003

-Raj. 







MySQL - Transaction/Commit Question/Issue

2007-11-15 Thread bruce
Hi...

I'm considering the following issue:
 
 I need to copy, in db1, tbl_1 -> tbl_2,
  and in db2, cat -> dog.

So I need to perform copies of both tables in the two databases, and I need
them both to succeed, or both to be rolled back. The copies are in two
separate databases.

Any thoughts on this?

thanks



Re: Rollback on a Transaction with No Updates

2007-09-19 Thread Baron Schwartz

Robert DiFalco wrote:

Is there any difference between calling rollback or commit on a
transaction that did not alter data? For example, not a read-only
transaction but a transaction that only performed read-only selects. Any
difference in performance between calling rollback or commit? I know
they are functionally the same at the high level. 


I just thought of a difference.  If you are using LOCK TABLES and UNLOCK 
TABLES, these interact differently with COMMIT and ROLLBACK.  More info:


http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html

Baron




Re: Rollback on a Transaction with No Updates

2007-09-18 Thread Michael Dykman
I realize that wasn't the question, but it does seem like a lot of
trouble to get the equivalent of setAutoCommit(true);

On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
 Sure, but that wasn't really the question.

 -Original Message-
 From: Michael Dykman [mailto:[EMAIL PROTECTED]
 Sent: Monday, September 17, 2007 2:56 PM
 To: Robert DiFalco
 Cc: Baron Schwartz; mysql@lists.mysql.com
 Subject: Re: Rollback on a Transaction with No Updates

  If your transactions are only one query deep, why use them at all?  An
 individual query is already atomic, regardless of table type/server
 mode.

   - michael dykman


 On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
  While it is functionally equivalent, I wonder if the code paths
  taken are the same. I suppose for both commit and rollback MySQL would

  have to look for any pending work, if there were none both would do
 nothing.
  That's what makes me think that there is probably no performance
  difference between the two. I ask this because my programmers like to
  do
  this:
 
  con = ...
  try
  {
 queryOnlyWith( con );
  }
  finally
  {
 con.rollback();
  }
 
  And I wanted to make sure that this would perform the same and act the

  same as issuing a commit (unless there was an exception but I'm not
  analyzing that case).
 
  -Original Message-
  From: Baron Schwartz [mailto:[EMAIL PROTECTED]
  Sent: Monday, September 17, 2007 2:36 PM
  To: Robert DiFalco
  Cc: mysql@lists.mysql.com
  Subject: Re: Rollback on a Transaction with No Updates
 
  Robert DiFalco wrote:
   Is there any difference between calling rollback or commit on a
   transaction that did not alter data? For example, not a read-only
   transaction but a transaction that only performed read-only selects.
   Any difference in performance between calling rollback or commit? I
   know they are functionally the same at the high level.
 
  The only thing I could think of was that possibly rollback would leave
  the transaction and its read view open if you are running in REPEATABLE
  READ isolation mode, whereas commit begins a new transaction and discards
  the read view.  But I just tested that, and both commands start a new
  transaction and discard the read view.
 
  That's a long way of saying they are functionally equivalent as far as

  I know, as long as there are no changes to discard.
 
  Baron
 
 
 
 
 


 --
  - michael dykman
  - [EMAIL PROTECTED]

  - All models are wrong.  Some models are useful.





-- 
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.




RE: Rollback on a Transaction with No Updates

2007-09-18 Thread Robert DiFalco
Well, assume a higher-level abstraction that does not give its clients
access to the raw connection. It only has methods like update, search,
commit, or rollback. What the connection is doing is an implementation
detail.

-Original Message-
From: Michael Dykman [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 18, 2007 10:00 AM
To: Robert DiFalco
Cc: Baron Schwartz; mysql@lists.mysql.com
Subject: Re: Rollback on a Transaction with No Updates

I realize that wasn't the question, but it does seem like a lot of
trouble to get the equivalent of setAutoCommit(true);

On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
 Sure, but that wasn't really the question.

 -Original Message-
 From: Michael Dykman [mailto:[EMAIL PROTECTED]
 Sent: Monday, September 17, 2007 2:56 PM
 To: Robert DiFalco
 Cc: Baron Schwartz; mysql@lists.mysql.com
 Subject: Re: Rollback on a Transaction with No Updates

 If your transactions are only 1 query deep, why use them at all?  An
 individual query is already atomic, regardless of table type/server
 mode.

  - michael dykman


 On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
  While they are functionally equivalent, I wonder if the code paths
  taken are the same. I suppose for both commit and rollback mysql 
  would

  have to look for any pending work, if there were none both would do
 nothing.
  That's what makes me think that there is probably no performance 
  difference between the two. I ask this because my programmers like 
  to do
  this:
 
  con = ...
  try
  {
 queryOnlyWith( con );
  }
  finally
  {
 con.rollback();
  }
 
  And I wanted to make sure that this would perform the same and act 
  the

  same as issuing a commit (unless there was an exception but I'm not 
  analyzing that case).
 
  -Original Message-
  From: Baron Schwartz [mailto:[EMAIL PROTECTED]
  Sent: Monday, September 17, 2007 2:36 PM
  To: Robert DiFalco
  Cc: mysql@lists.mysql.com
  Subject: Re: Rollback on a Transaction with No Updates
 
  Robert DiFalco wrote:
   Is there any difference between calling rollback or commit on a 
   transaction that did not alter data? For example, not a read-only 
   transaction but a transaction that only performed read-only
selects.
   Any difference in performance between calling rollback or commit? 
   I know they are functionally the same at the high level.
 
  The only thing I could think of was that possibly rollback would leave
  the transaction and its read view open if you are running in REPEATABLE
  READ isolation mode, whereas commit begins a new transaction and discards
  the read view.  But I just tested that, and both commands start a new
  transaction and discard the read view.
 
  That's a long way of saying they are functionally equivalent as far 
  as

  I know, as long as there are no changes to discard.
 
  Baron
 
 
 
 
 


 --
  - michael dykman
  - [EMAIL PROTECTED]

  - All models are wrong.  Some models are useful.





-- 
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.






Re: Rollback on a Transaction with No Updates

2007-09-17 Thread Baron Schwartz

Robert DiFalco wrote:

Is there any difference between calling rollback or commit on a
transaction that did not alter data? For example, not a read-only
transaction but a transaction that only performed read-only selects. Any
difference in performance between calling rollback or commit? I know
they are functionally the same at the high level. 


The only thing I could think of was that possibly rollback would leave the
transaction and its read view open if you are running in REPEATABLE READ
isolation mode, whereas commit begins a new transaction and discards the
read view.  But I just tested that, and both commands start a new
transaction and discard the read view.


That's a long way of saying they are functionally equivalent as far as I 
know, as long as there are no changes to discard.


Baron
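Baron's observation can be reproduced outside MySQL too. A minimal sketch, using Python's sqlite3 as a stand-in for an InnoDB connection (an assumption; the engines differ in detail, but the point carries over): a transaction that only reads leaves the data identical whether it ends with COMMIT or ROLLBACK.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [(1,), (2,)])
conn.commit()

# Read-only transaction ended with ROLLBACK ...
n_rollback = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.rollback()

# ... versus one ended with COMMIT: the data is unchanged either way.
n_commit = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.commit()

print(n_rollback, n_commit)  # both 2
```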




RE: Rollback on a Transaction with No Updates

2007-09-17 Thread Robert DiFalco
While they are functionally equivalent, I wonder if the code paths taken
are the same. I suppose for both commit and rollback MySQL would have to
look for any pending work; if there were none, both would do nothing.
That's what makes me think that there is probably no performance
difference between the two. I ask this because my programmers like to do
this:

con = ...
try
{
   queryOnlyWith( con );
}
finally
{
   con.rollback();
}

And I wanted to make sure that this would perform the same and act the
same as issuing a commit (unless there was an exception but I'm not
analyzing that case).
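A rough Python equivalent of that pattern, with sqlite3 as a stand-in for the real connection and `query_only_with` as a hypothetical read-only helper (both are assumptions, not from the thread): the rollback in the finally block is harmless after read-only work, and also discards anything half-done if an exception escaped.

```python
import sqlite3

def query_only_with(conn):
    # hypothetical read-only helper, standing in for queryOnlyWith(con)
    return conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t DEFAULT VALUES")
conn.commit()

try:
    count = query_only_with(conn)
finally:
    # Equivalent to COMMIT when nothing was changed; cleans up any
    # pending work if query_only_with() raised partway through.
    conn.rollback()

print(count)  # 1
```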

-Original Message-
From: Baron Schwartz [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 17, 2007 2:36 PM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Rollback on a Transaction with No Updates

Robert DiFalco wrote:
 Is there any difference between calling rollback or commit on a 
 transaction that did not alter data? For example, not a read-only 
 transaction but a transaction that only performed read-only selects. 
 Any difference in performance between calling rollback or commit? I 
 know they are functionally the same at the high level.

The only thing I could think of was that possibly rollback would leave the
transaction and its read view open if you are running in REPEATABLE READ
isolation mode, whereas commit begins a new transaction and discards the
read view.  But I just tested that, and both commands start a new
transaction and discard the read view.

That's a long way of saying they are functionally equivalent as far as I
know, as long as there are no changes to discard.

Baron






Re: Rollback on a Transaction with No Updates

2007-09-17 Thread Michael Dykman
If your transactions are only 1 query deep, why use them at all?  An
individual query is already atomic, regardless of table type/server
mode.

 - michael dykman


On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
 While they are functionally equivalent, I wonder if the code paths taken
 are the same. I suppose for both commit and rollback mysql would have to
 look for any pending work, if there were none both would do nothing.
 That's what makes me think that there is probably no performance
 difference between the two. I ask this because my programmers like to do
 this:

 con = ...
 try
 {
queryOnlyWith( con );
 }
 finally
 {
con.rollback();
 }

 And I wanted to make sure that this would perform the same and act the
 same as issuing a commit (unless there was an exception but I'm not
 analyzing that case).

 -Original Message-
 From: Baron Schwartz [mailto:[EMAIL PROTECTED]
 Sent: Monday, September 17, 2007 2:36 PM
 To: Robert DiFalco
 Cc: mysql@lists.mysql.com
 Subject: Re: Rollback on a Transaction with No Updates

 Robert DiFalco wrote:
  Is there any difference between calling rollback or commit on a
  transaction that did not alter data? For example, not a read-only
  transaction but a transaction that only performed read-only selects.
  Any difference in performance between calling rollback or commit? I
  know they are functionally the same at the high level.

 The only thing I could think of was that possibly rollback would leave
 the transaction and its read view open if you are running in REPEATABLE
 READ isolation mode, whereas commit begins a new transaction and discards
 the read view.  But I just tested that, and both commands start a new
 transaction and discard the read view.

 That's a long way of saying they are functionally equivalent as far as I
 know, as long as there are no changes to discard.

 Baron







-- 
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.




Rollback on a Transaction with No Updates

2007-09-17 Thread Robert DiFalco
Is there any difference between calling rollback or commit on a
transaction that did not alter data? For example, not a read-only
transaction but a transaction that only performed read-only selects. Any
difference in performance between calling rollback or commit? I know
they are functionally the same at the high level. 





RE: Rollback on a Transaction with No Updates

2007-09-17 Thread Robert DiFalco
Sure, but that wasn't really the question.

-Original Message-
From: Michael Dykman [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 17, 2007 2:56 PM
To: Robert DiFalco
Cc: Baron Schwartz; mysql@lists.mysql.com
Subject: Re: Rollback on a Transaction with No Updates

If your transactions are only 1 query deep, why use them at all?  An
individual query is already atomic, regardless of table type/server
mode.

 - michael dykman


On 9/17/07, Robert DiFalco [EMAIL PROTECTED] wrote:
 While they are functionally equivalent, I wonder if the code paths
 taken are the same. I suppose for both commit and rollback mysql would

 have to look for any pending work, if there were none both would do
nothing.
 That's what makes me think that there is probably no performance 
 difference between the two. I ask this because my programmers like to 
 do
 this:

 con = ...
 try
 {
queryOnlyWith( con );
 }
 finally
 {
con.rollback();
 }

 And I wanted to make sure that this would perform the same and act the

 same as issuing a commit (unless there was an exception but I'm not 
 analyzing that case).

 -Original Message-
 From: Baron Schwartz [mailto:[EMAIL PROTECTED]
 Sent: Monday, September 17, 2007 2:36 PM
 To: Robert DiFalco
 Cc: mysql@lists.mysql.com
 Subject: Re: Rollback on a Transaction with No Updates

 Robert DiFalco wrote:
  Is there any difference between calling rollback or commit on a 
  transaction that did not alter data? For example, not a read-only 
  transaction but a transaction that only performed read-only selects.
  Any difference in performance between calling rollback or commit? I 
  know they are functionally the same at the high level.

 The only thing I could think of was that possibly rollback would leave
 the transaction and its read view open if you are running in REPEATABLE
 READ isolation mode, whereas commit begins a new transaction and discards
 the read view.  But I just tested that, and both commands start a new
 transaction and discard the read view.

 That's a long way of saying they are functionally equivalent as far as

 I know, as long as there are no changes to discard.

 Baron







--
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.






transaction problem

2007-09-05 Thread [EMAIL PROTECTED]

Hi list!
I have a problem with a transaction.
I start a transaction with START TRANSACTION, then I execute the
following query:
* update t1 set t=12; when I am done I execute ROLLBACK, but when I do
it I get the following warning:
Some non-transactional changed tables couldn't be rolled back, and
the UPDATE goes through anyway, updating the data.

Why does it do that?
I'm using MySQL 4.1.20 and MyISAM tables.

Thanks,

Pablo




Re: transaction problem

2007-09-05 Thread Martijn Tonies
Hi,

 I have a problem with a transaction.
 I start a transaction with START TRANSACTION, then I execute the
 following query:
 * update t1 set t=12; when I am done I execute ROLLBACK, but when I do
 it I get the following warning:
 Some non-transactional changed tables couldn't be rolled back, and
 the UPDATE goes through anyway, updating the data.
 Why does it do that?
 I'm using MySQL 4.1.20 and MyISAM tables

Have you been reading any documentation on Transactions in MySQL?

MyISAM tables don't support transactions!

For each table, you can assign a different storage engine, which
is the implementation that physically stores your data. Depending
on the storage engine you're using for your table, MySQL supports
different features, like transactions.

Read the docs on the different engines.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com




transaction problem

2007-09-05 Thread [EMAIL PROTECTED]

Hi list,

I am going to change the table type to InnoDB.

Thanks!!

Pablo




Re: transaction problem

2007-09-05 Thread Baron Schwartz

Hi,

[EMAIL PROTECTED] wrote:

Hi list!
I have a problem with a transaction.
I start a transaction with START TRANSACTION, then I execute the
following query:
* update t1 set t=12; when I am done I execute ROLLBACK, but when I do
it I get the following warning:
Some non-transactional changed tables couldn't be rolled back, and
the UPDATE goes through anyway, updating the data.

Why does it do that?
I'm using MySQL 4.1.20 and MyISAM tables


MyISAM doesn't support transactions.  You probably need to use InnoDB. 
(There are some other storage engines with transaction support, but they 
are not usually distributed with most MySQL servers).


There is more information on this in the MySQL manual.

Baron




Re: Transaction/locking confusion

2007-03-14 Thread Marcus Bointon

On 13 Mar 2007, at 13:44, JamesDR wrote:

With what you've provided us: You can not prevent this. You are running
in a transaction which is isolated from any others.


But doesn't that isolation provide atomicity, i.e. the first  
transaction to commit will act as if all of its component queries  
occurred before the first query of the second one, even if they are  
actually issued in an overlapping order? Consequently, the first  
query of the second transaction in my example should not succeed in  
finding anything. Or are transactions not atomic this way??


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of [EMAIL PROTECTED] CRM solutions
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Re: Transaction/locking confusion

2007-03-14 Thread Marcus Bointon

On 14 Mar 2007, at 18:02, Michael Dykman wrote:

SELECT * from process WHERE id = 123 AND status = 'init' FOR
UPDATE;


-- do a bunch of other stuff ...

UPDATE process SET status = 'ready' WHERE id = 123 AND status =  
'init';


I get what you're doing here, but I don't see why it's necessary -  
the update will automatically acquire locks on matched rows while  
it's doing the update - at least that's the impression I've had from  
the docs:


A SELECT ... FOR UPDATE reads the latest available data, setting  
exclusive locks on each row it reads. Thus, it sets the same locks a  
searched SQL UPDATE would set on the rows.


UPDATE ... WHERE ... sets an exclusive next-key lock on every record  
the search encounters.


So it sounds like the select for update will effectively be the same  
as what I'm already doing, and thus suffer the same problem.


Is it just that locks don't apply outside the transaction? If  
transactions can't solve synchronisation problems between processes,  
what are they for??!


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of [EMAIL PROTECTED] CRM solutions
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Re: Transaction/locking confusion

2007-03-14 Thread Michael Dykman

The difference here is that any row locked via SELECT ... FOR UPDATE
will not even allow readers access to that record until the transaction
ends, so you will not have processes concurrently trying to process it.
This means that, for each row, any processes acting on it are forced to
be sequential: each subsequent phase needs to wait for the previous one
to complete.
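A sketch of that serialisation, with sqlite3 standing in for InnoDB (an assumption: sqlite has no SELECT ... FOR UPDATE, so BEGIN IMMEDIATE plays the role of taking the lock up front). The second worker cannot even start its claim until the first finishes or times out:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None = manual transaction control (explicit BEGIN/COMMIT)
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE process (id INTEGER PRIMARY KEY, status TEXT)")
a.execute("INSERT INTO process VALUES (123, 'init')")

# Worker A claims the row; BEGIN IMMEDIATE grabs the write lock up front,
# analogous to SELECT ... FOR UPDATE taking row locks before the UPDATE.
a.execute("BEGIN IMMEDIATE")
a.execute("UPDATE process SET status = 'ready' "
          "WHERE id = 123 AND status = 'init'")

# Worker B tries to start its own claim while A's transaction is open.
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
try:
    b.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked" after the timeout
    blocked = True

a.execute("COMMIT")
print(blocked)  # True: B had to wait for A
```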

- michael

On 3/14/07, Marcus Bointon [EMAIL PROTECTED] wrote:

On 14 Mar 2007, at 18:02, Michael Dykman wrote:

 SELECT * from process WHERE id = 123 AND status = 'init' FOR
 UPDATE;

 -- do a bunch of other stuff ...

 UPDATE process SET status = 'ready' WHERE id = 123 AND status =
 'init';

I get what you're doing here, but I don't see why it's necessary -
the update will automatically acquire locks on matched rows while
it's doing the update - at least that's the impression I've had from
the docs:

A SELECT ... FOR UPDATE reads the latest available data, setting
exclusive locks on each row it reads. Thus, it sets the same locks a
searched SQL UPDATE would set on the rows.

UPDATE ... WHERE ... sets an exclusive next-key lock on every record
the search encounters.

So it sounds like the select for update will effectively be the same
as what I'm already doing, and thus suffer the same problem.

Is it just that locks don't apply outside the transaction? If
transactions can't solve synchronisation problems between processes,
what are they for??!

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of [EMAIL PROTECTED] CRM solutions
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.




Transaction/locking confusion

2007-03-13 Thread Marcus Bointon

Hi,

(repeat posting)

I have a simple PHP function that initialises a process definition.  
To prevent it happening more than once, I'm wrapping it in a  
transaction, however, it doesn't seem to be working and I get  
multiple initialisations. In pseudocode:


BEGIN;
UPDATE process SET status = 'ready' WHERE id = 123 AND status =  
'init' LIMIT 1;

...do other stuff including some INSERTs
if other stuff is OK:
COMMIT;
else
ROLLBACK;

If I have two simultaneous processes running this script, somehow  
they are both able to initialise. I guess that if the overlapping  
transactions are isolated, then both see the process in the 'init'  
status and that the 'other stuff' part takes long enough that it's  
reasonably likely to happen. I was under the impression that the  
UPDATE inside the transaction would lock the row and prevent the  
later query from succeeding, but it seems that's not how it works.


How can I prevent this situation? Do I need to lock the row  
explicitly? Why doesn't the transaction provide sufficient isolation?
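One common way out (a sketch, with sqlite3 as a stand-in and the table shrunk to the relevant columns) is not to rely on isolation at all, but on the UPDATE's affected-row count: whichever process actually matched the row wins, and the other sees zero rows changed and backs off.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO process VALUES (123, 'init')")
conn.commit()

def claim(conn, pid):
    # The WHERE clause only matches while status is still 'init', so
    # exactly one caller can ever see rowcount == 1.
    cur = conn.execute(
        "UPDATE process SET status = 'ready' "
        "WHERE id = ? AND status = 'init'", (pid,))
    conn.commit()
    return cur.rowcount == 1

first = claim(conn, 123)   # first claimer matches the row
second = claim(conn, 123)  # second claimer matches nothing
print(first, second)  # True False
```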


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/










Re: Transaction/locking confusion

2007-03-13 Thread JamesDR
Marcus Bointon wrote:
 Hi,
 
 (repeat posting)
 
 I have a simple PHP function that initialises a process definition. To
 prevent it happening more than once, I'm wrapping it in a transaction,
 however, it doesn't seem to be working and I get multiple
 initialisations. In pseudocode:
 
 BEGIN;
 UPDATE process SET status = 'ready' WHERE id = 123 AND status = 'init'
 LIMIT 1;
 ...do other stuff including some INSERTs
 if other stuff is OK:
 COMMIT;
 else
 ROLLBACK;
 
 If I have two simultaneous processes running this script, somehow they
 are both able to initialise. I guess that if the overlapping
 transactions are isolated, then both see the process in the 'init'
 status and that the 'other stuff' part takes long enough that it's
 reasonably likely to happen. I was under the impression that the UPDATE
 inside the transaction would lock the row and prevent the later query
 from succeeding, but it seems that's not how it works.
 
 How can I prevent this situation? Do I need to lock the row explicitly?
 Why doesn't the transaction provide sufficient isolation?
 
 Marcus
 --Marcus Bointon
 Synchromedia Limited: Creators of http://www.smartmessages.net/
 [EMAIL PROTECTED] | http://www.synchromedia.co.uk/

With what you've provided us: You can not prevent this. You are running
in a transaction which is isolated from any others. One way to prevent
this may be to write a pid file in your script, then check for its
existence. If it does exist then just die out (script was already running.)

I have scripts that perform tasks on regular intervals. Sometimes a
script runs long and may overlap into the next run time. I check for the
file's existence; if it does exist, I just print a message and exit. If it
does not exist, I touch the file to create it and move on... deleting it
later, of course.


-- 
Thanks,
James




Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Maciej Dobrzanski
In news:[EMAIL PROTECTED],
Ian P. Christian [EMAIL PROTECTED] wrote:

 This database I'm dumping has something like 17 million rows, all but
 1 table (which uses FULLTEXT, and only has 3-4k rows) run innodb.
 There is only one table of any real size, and this table has all but
 about 100k of the total rows in. My understanding of this command is
 that the database should not be locked whilst this command is running.

 However, here's my problem...
 When the dump starts to read from large table, the database just
 grinds to a halt - my website running from the database just stops,
 and the dump (which I was watching progress with a privative `watch
 ls -la`) slows down a bit.

Unless the isolation level on your server is set to serializable, no locks 
should be set during the dump.

My guess is that in a given period of time your web applications use only a
fraction of the data stored in tables. Therefore most of the needed
information is constantly cached in the internal buffers and thus the
queries are executed quickly and efficiently. Since InnoDB only reads data
through those buffers, when you request all the rows from a multi-million
row table, the buffer pool contents are overwritten by random data pages. In
consequence the execution of most queries can no longer be satisfied with
fast memory access, as the data is no longer there. The workload becomes
heavily disk-bound, which is not efficient enough for your web traffic.

Maciek






Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Ian P. Christian

Filip Krejci wrote:
 Hi,

 you are right, the --single-transaction option does not acquire any locks
 on your InnoDB tables. The backup is fully online thanks to MVCC.

 You should look for another reason for this behavior.

 1/ What does 'show full processlist' say while the backup is running?
 2/ What does 'show engine innodb status\G' say while the backup is running?
 3/ Is your db server stopped, or only very slow? (you will see this in 1/)
 4/ If it's very slow:
 a/ what is your mysqldump config in my.cnf?
 b/ what about a remote backup (mysqldump --host x.x.x.x --master-data
 --single-transaction > dump.sql), which would move the dump's write load
 onto another box?

I tried this again and gathered some more information.

Honestly, I'm not entirely sure what I'm looking for, so this mail
contains a lot of information from 'SHOW ENGINE INNODB STATUS'.

To summarize what I can see from this, it appears that the UPDATE on
table_a is stuck waiting on a lock, and that simple update had taken 8
seconds by the time I stopped the dump. At this point in time there
were tens of queries queued up, and the site had stopped functioning
entirely. As you can see, this only starts to be a problem about 2.5
minutes into the remote dump (here shown from remote.server, 153 seconds
in). This seems to be a consistent place for the dump to become a problem -
about 1.2 GB of data has been transferred to remote.server at this
point, which is the same as in my previous attempts.

I can't understand why a simple primary key based UPDATE on a table
could slow down this much (note this is not the table being dumped at
this point, but in my setup all InnoDB databases are in one file, so as
far as the OS is concerned I guess the fact it's different tables means
nothing)

Perhaps I should use iptables to rate-limit traffic between the database
server and remote.server - that's on the assumption that this is
simply an I/O problem... however I'm not sure that's what it is; as I
said above, I'm not entirely sure how to fully interpret what I'm looking
at below.

The output below has had tens of queries removed, and only shows the
queries I considered relevant.

--
---TRANSACTION 0 190439971, ACTIVE 7 sec, process no 23228, OS thread id 
2296302480 starting index read

mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 320
MySQL thread id 743938, query id 22854373 localhost 127.0.0.1 dbuser 
Updating

UPDATE table_a SET impression_count = impression_count + 1 WHERE id = '93'
--- TRX HAS BEEN WAITING 7 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 52 n bits 104 index `PRIMARY` of table 
`dbname/table_a` trx id 0 190439971 lock_mode X locks rec but not gap 
waiting
Record lock, heap no 16 PHYSICAL RECORD: n_fields 12; compact format; 
info bits 0
0: len 8; hex 805d; asc ];; 1: len 6; hex 0b59e1bd; asc 
Y ;; 2: len 7; hex 048018206d; asc m;; 3: len 8; hex 
800e; asc ;; 4: len 8; hex 806d; asc m;; 5: len 
8; hex 800045e61780; asc E ;; 6: SQL NULL; 7: len 8; hex 
800964d8; asc d ;; 8: len 8; hex 89fc; asc ;; 9: len 
4; hex 8001; asc ;; 10: len 4; hex 8001; asc ;; 11: len 4; hex 
c5e4e1c4; asc ;;


--
---TRANSACTION 0 190439869, ACTIVE 8 sec, process no 23228, OS thread id 
2277473168, thread declared inside InnoDB 500

mysql tables in use 1, locked 1
2 lock struct(s), heap size 320, undo log entries 1
MySQL thread id 743922, query id 22850253 localhost 127.0.0.1 dbuser end
UPDATE table_a SET impression_count = impression_count + 1 WHERE id = '93'
---TRANSACTION 0 190435058, ACTIVE 153 sec, process no 23228, OS thread 
id 2296707984 sleeping before joining InnoDB queue

mysql tables in use 1, locked 0
MySQL thread id 743227, query id 22843484 remote.server 123.123.123.123 
root Sending data

SELECT /*!40001 SQL_NO_CACHE */ * FROM `really_large_table`
Trx read view will not see trx with id >= 0 190435059, sees < 0 190435059

FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
152168 OS file reads, 5020745 OS file writes, 4737490 OS fsyncs
216.14 reads/s, 37081 avg bytes/read, 25.87 writes/s, 25.14 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf: size 1, free list len 5, seg size 7,
20774 inserts, 20771 merged recs, 20694 merges
Hash table size 2212699, used cells 17023, node heap has 18 buffer(s)
759.38 hash searches/s, 88.03 non-hash searches/s
---
LOG
---
Log sequence number 17 1691567773
Log flushed up to 17 1691567579
Last checkpoint at 17 1690543049
1 pending log writes, 0 pending chkp writes
4694950 log i/o's done, 25.00 log i/o's/second

Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Ian P. Christian

Filip Krejci wrote:
 Hi,

 I suppose this is really I/O problem.


You're right, it looks like it was just an I/O problem - your suggestion
was spot on. I've now managed to dump my master data, and can get my
slave back online!

Thanks a lot for your suggestion,

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Filip Krejci

Hi,

I suppose this is really I/O problem.

What does vmstat say during the backup?

I see many fsyncs, so you have probably innodb_flush_log_at_trx_commit=1

Try to set innodb_flush_log_at_trx_commit=2

If that solves your problem but you need innodb_flush_log_at_trx_commit=1,
install a battery-backed write cache on your RAID controller. If you have
one already, you probably need faster storage (better RAID level, more
disks, faster disks, etc...)


Maybe you can build some application-level write cache for the impression
counter - something like grouping the counts in shared memory or a
memcached server, and flushing them to the DB on a timeout.


You can also take the backup on a slave, as somebody mentioned before.

Filip



--
---TRANSACTION 0 190439971, ACTIVE 7 sec, process no 23228, OS thread id 
2296302480 starting index read

mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 320
MySQL thread id 743938, query id 22854373 localhost 127.0.0.1 dbuser 
Updating

UPDATE table_a SET impression_count = impression_count + 1 WHERE id = '93'
--- TRX HAS BEEN WAITING 7 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 52 n bits 104 index `PRIMARY` of table 
`dbname/table_a` trx id 0 190439971 lock_mode X locks rec but not gap 
waiting
Record lock, heap no 16 PHYSICAL RECORD: n_fields 12; compact format; 
info bits 0
0: len 8; hex 805d; asc ];; 1: len 6; hex 0b59e1bd; asc 
Y ;; 2: len 7; hex 048018206d; asc m;; 3: len 8; hex 
800e; asc ;; 4: len 8; hex 806d; asc m;; 5: len 
8; hex 800045e61780; asc E ;; 6: SQL NULL; 7: len 8; hex 
800964d8; asc d ;; 8: len 8; hex 89fc; asc ;; 9: len 
4; hex 8001; asc ;; 10: len 4; hex 8001; asc ;; 11: len 4; hex 
c5e4e1c4; asc ;;


--
---TRANSACTION 0 190439869, ACTIVE 8 sec, process no 23228, OS thread id 
2277473168, thread declared inside InnoDB 500

mysql tables in use 1, locked 1
2 lock struct(s), heap size 320, undo log entries 1
MySQL thread id 743922, query id 22850253 localhost 127.0.0.1 dbuser end
UPDATE table_a SET impression_count = impression_count + 1 WHERE id = '93'
---TRANSACTION 0 190435058, ACTIVE 153 sec, process no 23228, OS thread 
id 2296707984 sleeping before joining InnoDB queue

mysql tables in use 1, locked 0
MySQL thread id 743227, query id 22843484 remote.server 123.123.123.123 
root Sending data

SELECT /*!40001 SQL_NO_CACHE */ * FROM `really_large_table`
Trx read view will not see trx with id >= 0 190435059, sees < 0 190435059

FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
152168 OS file reads, 5020745 OS file writes, 4737490 OS fsyncs
216.14 reads/s, 37081 avg bytes/read, 25.87 writes/s, 25.14 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf: size 1, free list len 5, seg size 7,
20774 inserts, 20771 merged recs, 20694 merges
Hash table size 2212699, used cells 17023, node heap has 18 buffer(s)
759.38 hash searches/s, 88.03 non-hash searches/s
---
LOG
---
Log sequence number 17 1691567773
Log flushed up to 17 1691567579
Last checkpoint at 17 1690543049
1 pending log writes, 0 pending chkp writes
4694950 log i/o's done, 25.00 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 611637398; in additional pool allocated 3526400
Buffer pool size 32768
Free buffers 0
Database pages 32750
Modified db pages 188
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 315389, created 2063, written 474318
489.21 reads/s, 0.59 creates/s, 2.05 writes/s
Buffer pool hit rate 951 / 1000
--
ROW OPERATIONS
--
7 queries inside InnoDB, 1 queries in queue
2 read views open inside InnoDB
Main thread process no. 23228, id 2367634320, state: sleeping
Number of rows inserted 356882, updated 1963990, deleted 293832, read 
875872021

2.05 inserts/s, 10.32 updates/s, 0.21 deletes/s, 48500.03 reads/s


Thanks again,




--
Filip Krejci [EMAIL PROTECTED]

LINUX-for a better future




mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian
Recently my one and only slave went down, and stupidly I don't have a 
dump suitable for reseeding (if that's the right term...) the slave, so I 
need to make a snapshot of the master database again. This time I'll 
make sure I keep this datafile for future restores should I need to - 
you live and learn.


So... I'm doing a database dump:

mysqldump --master-data --single-transaction database > dump.sql

This database I'm dumping has something like 17 million rows; all but one 
table (which uses FULLTEXT, and only has 3-4k rows) run InnoDB. There is 
only one table of any real size, and it holds all but about 100k of the 
total rows. My understanding of this command is that the database should 
not be locked whilst it is running.


However, here's my problem...
When the dump starts to read from the large table, the database just grinds 
to a halt - my website running from the database just stops, and the dump 
(whose progress I was watching with a primitive `watch ls -la`) slows 
down a bit.


Last time I had to  do this (for the first 'seeding' of my slave), I 
eventually gave up trying to dump from the database whilst the site 
remained live, and took the site down for 15 minutes whilst the dump 
ran.  As I'm sure you'll understand I'm not too keen on taking the 
website down again.


Any suggestions as to why my database is stopping (could be I/O related 
maybe? it's on a good RAID setup though), and what I could do about it?


Many Thanks,

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Marcus Bointon

On 7 Mar 2007, at 08:44, Ian P. Christian wrote:


mysqldump --master-data --single-transaction database > dump.sql

This database I'm dumping has something like 17 million rows, all  
but 1 table (which uses FULLTEXT, and only has 3-4k rows) run  
innodb. There is only one table of any real size, and this table  
has all but about 100k of the total rows in. My understanding of  
this command is that the database should not be locked whilst this  
command is running.


--single-transaction doesn't _do_ the dump as a transaction, it  
simply wraps the dump in begin/commit statements so it's atomic when  
restoring.


If the dump is to preserve relational integrity then it has to lock  
tables or disable access (or writes/deletes can happen during the  
dump). There are two alternatives: One is to use innoDB's commercial  
hotbackup utility (which I've not used, but it's apparently 'the  
way'). The other is one of the reasons for using a slave - stop the  
slave, do the dump, restart the slave and it will catch up anything  
it missed. It helps if you can dedicate a slave for this.


I reserve the right to be completely wrong though ;^)

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Praj

Hi ,
--single-transaction performs the same mysqldump operation, wrapped in 
begin/end transaction statements. However, while a table is locked for 
the backup, your site may be slow.


--
Praj


Ian P. Christian wrote:
Recently my one and only slave went down, and stupidly I don't have a 
dump suitable for reseeding (if that's the right term...) the slave, 
so I need to make a snapshot of the master database again. This time 
I'll make sure I keep this datafile for future restores should I need 
to - you live and learn.


So... I'm doing a database dump:

mysqldump --master-data --single-transaction database > dump.sql

This database I'm dumping has something like 17 million rows, all but 
1 table (which uses FULLTEXT, and only has 3-4k rows) run innodb. 
There is only one table of any real size, and this table has all but 
about 100k of the total rows in. My understanding of this command is 
that the database should not be locked whilst this command is running.


However, here's my problem...
When the dump starts to read from the large table, the database just 
grinds to a halt - my website running from the database just stops, 
and the dump (whose progress I was watching with a primitive `watch ls 
-la`) slows down a bit.


Last time I had to  do this (for the first 'seeding' of my slave), I 
eventually gave up trying to dump from the database whilst the site 
remained live, and took the site down for 15 minutes whilst the dump 
ran.  As I'm sure you'll understand I'm not too keen on taking the 
website down again.


Any suggestions as to why my database is stopping (could be I/O 
related maybe? it's on a good RAID setup though), and what I could do 
about it?


Many Thanks,






Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian

Marcus Bointon wrote:

Hi Marcus :)

 On 7 Mar 2007, at 08:44, Ian P. Christian wrote:

 --single-transaction doesn't _do_ the dump as a transaction, it simply
 wraps the dump in begin/commit statements so it's atomic when restoring.

 If the dump is to preserve relational integrity then it has to lock
 tables or disable access (or writes/deletes can happen during the dump).
 There are two alternatives: One is to use innoDB's commercial hotbackup
 utility (which I've not used, but it's apparently 'the way').

I was under the impression that, with InnoDB's multi-versioning, it
wouldn't need to take a write lock?

Sorry to quote this much from 'mysqldump --help'

--master-data[=#]
This causes the binary log position and filename to be
appended to the output. If equal to 1, will print it as a
CHANGE MASTER command; if equal to 2, that command will
be prefixed with a comment symbol. This option will turn
--lock-all-tables on, unless --single-transaction is
specified too (in which case a global read lock is only
taken a short time at the beginning of the dump - don't
forget to read about --single-transaction below). In all
cases any action on logs will happen at the exact moment
of the dump. Option automatically turns --lock-tables off.

--single-transaction
Creates a consistent snapshot by dumping all tables in a
single transaction. Works ONLY for tables stored in
storage engines which support multiversioning (currently
only InnoDB does); the dump is NOT guaranteed to be
consistent for other storage engines. Option
automatically turns off --lock-tables.


I'll accept that my interpretation of the above could be very wrong,
however...

 The other
 is one of the reasons for using a slave - stop the slave, do the dump,
 restart the slave and it will catch up anything it missed. It helps if
 you can dedicate a slave for this.

Yes...I'm aware of this one, but alas.. this was my only slave, and it
managed to become out of sync somehow (something to do with a bug when
using 'mysqladmin kill'). Now I know that things like this happen, I'll
take weekly snapshots of the slave data, but like I said - you live and
learn :)


Whilst I'm here and talking about slaves... is it possible for one slave
to replicate 2 different databases from 2 different hosts?

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Filip Krejci

Hi,

you are right, the --single-transaction option does not acquire any locks 
on your InnoDB tables. The backup is fully online thanks to MVCC.


You should look for another reason of this behavior.

1/ What does 'show full processlist' say while the backup is running?
2/ What does 'show engine innodb status\G' say while the backup is running?
3/ Is your db server stopped, or only very slow? (you will see in 1/)
4/ If it's very slow:
   a/ what is your mysqldump config in my.cnf?
   b/ what about a remote backup (mysqldump --host x.x.x.x --master-data 
--single-transaction > dump.sql), which moves the dump's writes onto 
another box?




--
Filip Krejci [EMAIL PROTECTED]

Vini, vidi, Linux!




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian

Filip Krejci wrote:

Hi,

you are right, the --single-transaction option does not acquire any locks 
on your InnoDB tables. The backup is fully online thanks to MVCC.


You should look for another reason of this behavior.

1/ What does 'show full processlist' say while the backup is running?


It shows mostly inserts/updates into tables other than the huge 16-17 
million row one. These tables are used for session storage, and since 
writing the session is almost the first thing my site does, when those 
queries block, no SELECT statements for site content are executed.



2/ What does 'show engine innodb status\G' say while the backup is running?


I've actually not checked - I shall do that and report shortly - it's 
nearly midday for me, and that means lots of traffic on the site, not a 
good time to be playing :)



3/ Is your db server stopped, or only very slow? (you will see in 1/)


I *think* it's slow to the point where it's unusable.  Until the backup 
gets round to the huge table, inserts/updates/selects all go on just fine.

4/ If it's very slow:
   a/ what is your mysqldump config in my.cnf?


[mysqldump]
quick
max_allowed_packet  = 16M

that's all there is... I'm not sure what the 'quick' option does; I'll go 
and check the manual on that, perhaps that's the issue.  I'm guessing I 
might have more luck increasing the packet size?


   b/ what about a remote backup (mysqldump --host x.x.x.x --master-data 
--single-transaction > dump.sql), which moves the dump's writes onto 
another box?


I did try this, on the assumption it could simply be an I/O problem - 
however the problem persists.  That might be because the network 
connection between the two hosts is pretty fast.



Thanks Filip!

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Marcus Bointon


On 7 Mar 2007, at 09:30, Ian P. Christian wrote:


--single-transaction
Creates a consistent snapshot by dumping all tables in a
single transaction. Works ONLY for tables stored in
storage engines which support multiversioning (currently
only InnoDB does); the dump is NOT guaranteed to be
consistent for other storage engines. Option
automatically turns off --lock-tables.


I'll accept that my interpretation of the above could be very wrong,
however...


That does indeed sound promising - however, you said that not all  
your tables are InnoDB, so maybe it just gets ignored?



Whilst I'm here and talking about slaves... is it possible to have a
slave to 2 different databases on 2 different hosts?


Apparently not: 
http://www.nabble.com/one-slave-with-multiple-masters-t2063846.html


Having said that, I don't see any reason you couldn't run multiple  
instances of mysqld on one machine, perhaps on different ports or  
interfaces.


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Filip Krejci

I have little information, but I suppose you are at the performance limit
of your db server, so you have no headroom for doing the backup.

Send a few rows of output from vmstat 1, both before and during the
backup process.

How are these numbers:
 - queries per second?
 - the update/select ratio?

What are your settings for these variables?
 - sync_binlog
 - innodb_flush_log_at_trx_commit

Do you have a RAID controller with write cache enabled?

What are the box's specs?

Data size?

Index size?

'show engine innodb status' isn't a demanding query, and it can give very
useful information about the state of your server.




--
Filip Krejci [EMAIL PROTECTED]

Linux - und Spass dabei!




Transaction/locking confusion

2007-03-05 Thread Marcus Bointon

Hi,

I have a simple PHP function that initialises a process definition. 
To prevent it happening more than once, I'm wrapping it in a 
transaction; however, it doesn't seem to be working and I get 
multiple initialisations. In pseudocode:


BEGIN;
UPDATE process SET status = 'ready' WHERE id = 123 AND status =  
'init' LIMIT 1;

...do other stuff including some INSERTs
if other stuff is OK:
COMMIT;
else
ROLLBACK;

If I have two simultaneous processes running this script, somehow 
they are both able to initialise. I guess that since the overlapping 
transactions are isolated, both see the process in the 'init' status, 
and the 'other stuff' part takes long enough that this is reasonably 
likely to happen. I was under the impression that the UPDATE inside 
the transaction would lock the row and prevent the later query from 
succeeding, but it seems that's not how it works.


How can I prevent this situation? Do I need to lock the row  
explicitly? Why doesn't the transaction provide sufficient isolation?
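One common pattern for this kind of claim-the-row logic (sketched here; SQLite is used purely so the example is self-contained, and this is not necessarily how the PHP above is structured) is to treat the UPDATE itself as the test: proceed only if it reports exactly one affected row.

```python
import sqlite3

# Under snapshot reads, two transactions can both still see status = 'init',
# so a prior SELECT proves nothing. Instead, check how many rows the claiming
# UPDATE actually changed: only one caller can win the init -> ready transition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO process VALUES (123, 'init')")
conn.commit()

def try_claim(c):
    cur = c.execute(
        "UPDATE process SET status = 'ready' WHERE id = 123 AND status = 'init'"
    )
    c.commit()
    return cur.rowcount == 1  # exactly one row changed => we won the claim

first = try_claim(conn)   # claims the row
second = try_claim(conn)  # row is no longer 'init', so nothing changes
print(first, second)  # True False
```

In PHP against MySQL the equivalent check is the affected-row count of the UPDATE; if it is zero, another process already initialised and the "other stuff" should be skipped.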


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Sequence ID generation transaction-safe?

2006-12-16 Thread Frederic Wenzel

Hi,

I am using a sequence table as suggested in
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#id2887015

in order to generate new, unique IDs for a table where I have a
primary key spanning multiple columns and still need a unique numeric
part without being able to use an auto-increment table.

I am using "UPDATE translations_seq SET id=LAST_INSERT_ID(id+1)" and
then I fetch my newest ID with "SELECT id FROM translations_seq".


While this method is described in the manual as "multi-user safe", I
was wondering whether it is also transaction safe. When two users start
a transaction at the same time (and don't commit yet), will they get
different IDs? Because the UPDATE statement will not take effect until
it is COMMITted, I am unsure whether it will actually hand out different
IDs for both of the transactions.

Can anyone enlighten me?

Thanks
Fred




Re: Sequence ID generation transaction-safe?

2006-12-16 Thread Eric Bergen

Hi Frederic,

Update then select on a single-row table is transaction safe. If two
users start a transaction and issue update queries, the first query to
execute will set a lock on that row. The second query will then block
on the update, waiting to obtain the same lock. In InnoDB, row locks are
not released until a transaction commits. This means that the
transaction with the successful update can issue another select to
fetch the new row id while the other transaction is still waiting on
the row lock. When the first transaction commits, the row lock will be
freed, allowing the second transaction to obtain the lock, update,
and select the next number in the sequence.
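The update-then-select sequence described above can be sketched as follows (an illustration only: SQLite's writer lock stands in for the InnoDB row lock, and the table/column names echo Frederic's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE translations_seq (id INTEGER)")
conn.execute("INSERT INTO translations_seq VALUES (0)")
conn.commit()

def next_id(c):
    # The UPDATE opens the transaction and takes the write lock (like the
    # InnoDB row lock); the SELECT then reads back the value this
    # transaction produced, and commit releases the lock for the next caller.
    c.execute("UPDATE translations_seq SET id = id + 1")
    (new_id,) = c.execute("SELECT id FROM translations_seq").fetchone()
    c.commit()
    return new_id

ids = [next_id(conn) for _ in range(3)]
print(ids)  # [1, 2, 3]
```

Each call holds the lock across both statements, so no two callers can ever read back the same id.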

I'm curious why you are using a sequence table to generate unique ids
for another table. Why not just change the other table to have an
auto_increment primary key and a secondary unique key to replace the
current primary key?

Innodb uses a special mechanism to allocate auto_increment ids that is
much faster than a sequence table...
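A sketch of the restructuring Eric suggests; the column names here are invented for illustration, and the real table would keep its existing multi-column key as the UNIQUE constraint:

```sql
-- Hypothetical layout: surrogate auto_increment primary key, with the
-- original composite key demoted to a UNIQUE index.
CREATE TABLE `translations` (
  `ID` int unsigned NOT NULL auto_increment,
  `Lang` char(2) NOT NULL,            -- invented original key columns
  `MsgKey` varchar(64) NOT NULL,
  `Body` text,
  PRIMARY KEY (`ID`),
  UNIQUE KEY (`Lang`, `MsgKey`)
) ENGINE=InnoDB;
```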


-Eric


On 12/16/06, Frederic Wenzel [EMAIL PROTECTED] wrote:

Hi,

I am using a sequence table as suggested in
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#id2887015

in order to generate new, unique IDs for a table where I have a
primary key spanning multiple columns and still need a unique numeric
part without being able to use an auto-increment table.

I am using UPDATE translations_seq SET id=LAST_INSERT_ID(id+1) and
then I fetch my newest ID with select id from translations_seq.


While this method is described in the manual as multi-user safe I
was wondering if this was also transaction safe? When two users start
a transaction at the same time (and don't commit it yet) will they get
different IDs? Because the UPDATE statement will not take place yet
until COMMITing it, I am unsure if it will actually hand out different
IDs for both of the transactions.

Can anyone enlighten me?

Thanks
Fred






--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com




Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-29 Thread Nico Sabbi

Mike Kruckenberg wrote:



mysql> SET @staff_id = LAST_INSERT_ID();
Query OK, 0 rows affected (0.01 sec)


I don't know if this behaviour has changed in later versions of mysql,
but using session variables, although lovely, was the quickest way to
break replication (at least up to and including 4.0.27)




InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Andre Matos
Hi List,

Let's suppose I have these two tables:

CREATE TABLE `changes` (
  `ID` int(12) unsigned NOT NULL auto_increment,
  `Key` varchar(25) collate latin1_general_cs NOT NULL default '',
  `Table` varchar(25) collate latin1_general_cs NOT NULL default '',
  `Value` text collate latin1_general_cs NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_general_cs
AUTO_INCREMENT=1;


CREATE TABLE `staff` (
  `ID` int(3) unsigned NOT NULL auto_increment,
  `Name` varchar(35) collate latin1_general_cs NOT NULL default '',
  PRIMARY KEY  (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_general_cs
AUTO_INCREMENT=1;

The idea is to have an audit trail to record the changes made. So, I want to
insert a new record into the staff table and, right after that, insert a
record into the changes table.

SET AUTOCOMMIT=0;
START TRANSACTION;
INSERT INTO staff (`Name`) VALUES ('ABC');
INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
'staff', 'ABC');
COMMIT;
SET AUTOCOMMIT=1;

This works fine in my test environment, but what about many users doing
this at the same time? Does LAST_INSERT_ID() get the correct ID for each
user?

Is there a better way to do this or this is fine? I will be using this with
PHP4.

Thanks for any help.

Andre







Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Mike Kruckenberg

Andre Matos wrote:

The idea is to have a audit trail to record the changes made. So, I want to
insert a new record in the staff table and right after this, insert a
record in the changes table.

SET AUTOCOMMIT=0;
START TRANSACTION;
INSERT INTO staff (`Name`) VALUES ('ABC');
INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
'staff', 'ABC');
COMMIT;
SET AUTOCOMMIT=1;

This works fine in my test environment, however what about many users doing
at the same time. Does the LAST_INSERT_ID() get the correct ID for each
user?
  
LAST_INSERT_ID() is connection-specific, so the ID will be the one that 
was assigned during this particular PHP page's connection to the database.
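The connection-scoped behaviour can be demonstrated with any client library; here is a small illustration using SQLite's per-connection `lastrowid` as a stand-in for MySQL's LAST_INSERT_ID() (table and names invented):

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file: each one tracks only
# its own last insert, so concurrent clients never see each other's ids.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn_a = sqlite3.connect(path)
conn_b = sqlite3.connect(path)
conn_a.execute(
    "CREATE TABLE staff (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)
conn_a.commit()

cur_a = conn_a.execute("INSERT INTO staff (name) VALUES ('alice')")
conn_a.commit()
cur_b = conn_b.execute("INSERT INTO staff (name) VALUES ('bob')")
conn_b.commit()

# conn_a's cursor still reports id 1 even though conn_b later inserted id 2.
print(cur_a.lastrowid, cur_b.lastrowid)  # 1 2
```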






Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Mike Kruckenberg

Andre Matos wrote:

SET AUTOCOMMIT=0;
START TRANSACTION;
INSERT INTO staff (`Name`) VALUES ('ABC');
INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
'staff', 'ABC');
COMMIT;
SET AUTOCOMMIT=1;

This works fine in my test environment, however what about many users doing
at the same time. Does the LAST_INSERT_ID() get the correct ID for each
user?

Is there a better way to do this or this is fine? I will be using this with
PHP4.
  
To further clarify (my initial reply didn't give much detail): when an 
auto-increment value is created for an insert, it is scoped to the 
current connection, and it is not changed by the outcome of the transaction.


If you follow the string of SQL statements against your tables, you'll 
see that the ID assigned to the record is not released on a rollback; 
the second insert gets a new auto-increment value.


mysql> SET AUTOCOMMIT=0;
Query OK, 0 rows affected (0.00 sec)

mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO staff (`Name`) VALUES ('ABC');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
    -> 'staff', 'ABC');
Query OK, 1 row affected (0.00 sec)

mysql> select * from staff;
+----+------+
| ID | Name |
+----+------+
|  1 | ABC  |
+----+------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+-------+
| ID | Key | Table | Value |
+----+-----+-------+-------+
|  1 | 1   | staff | ABC   |
+----+-----+-------+-------+
1 row in set (0.00 sec)

mysql> select last_insert_id();
+------------------+
| last_insert_id() |
+------------------+
|                1 |
+------------------+
1 row in set (0.00 sec)

mysql> rollback;
Query OK, 0 rows affected (0.26 sec)

mysql> select * from staff;
Empty set (0.00 sec)

mysql> select * from changes;
Empty set (0.00 sec)

mysql> INSERT INTO staff (`Name`) VALUES ('ABC');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
    -> 'staff', 'ABC');
Query OK, 1 row affected (0.00 sec)

mysql> select * from staff;
+----+------+
| ID | Name |
+----+------+
|  2 | ABC  |
+----+------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+-------+
| ID | Key | Table | Value |
+----+-----+-------+-------+
|  2 | 2   | staff | ABC   |
+----+-----+-------+-------+
1 row in set (0.00 sec)





Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Andre Matos
Thanks Mike.

I understand the possible gaps that I might have if I use ROLLBACK.
This is acceptable in my case.

What I really want to avoid is what I am doing now: opening one transaction
to insert, update, or delete certain information and closing it with the
commit; then getting the LAST_INSERT_ID() and opening another transaction
to write my audit trail. However, if the first one goes through ok but I
hit a problem in the second transaction, I have to delete the inserted or
updated rows, or restore the deleted information. This doesn't work well.


Let's expand my staff and changes tables to the following structure to
simulate my problem:

+----+------+--------+
| ID | Name | Gender |
+----+------+--------+

+----+-----+-------+-------+-------+
| ID | Key | Table | Field | Value |
+----+-----+-------+-------+-------+

And do this:

SET AUTOCOMMIT=0;
START TRANSACTION;

INSERT INTO staff (`Name`, `Gender`) VALUES ('ABC', 'M');

INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'ID', LAST_INSERT_ID());
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'Name', 'ABC');
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'Gender', 'M');

COMMIT;
SET AUTOCOMMIT=1;


I will get something like this:

mysql> select * from staff;
+----+------+--------+
| ID | Name | Gender |
+----+------+--------+
|  1 | ABC  | M      |
+----+------+--------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+--------+-------+
| ID | Key | Table | Field  | Value |
+----+-----+-------+--------+-------+
|  1 | 1   | staff | ID     | 1     |
|  2 | 1   | staff | Name   | ABC   |
|  3 | 2   | staff | Gender | M     |
+----+-----+-------+--------+-------+
3 rows in set (0.00 sec)

See that I have a problem in the third row, in the Key column, where I
should have 1 but got 2 instead. This happened because LAST_INSERT_ID()
picked up the ID from the changes table instead of the desired staff table.

Is there any way to avoid this? What about the mysql_insert_id()?
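One way to avoid the shifting value is to capture the id once, before any audit rows are written. This is only a sketch: `@sid` is an invented variable name (echoing the `SET @staff_id` usage quoted elsewhere in this archive), and note Nico's caveat in this thread about session variables and replication on very old servers:

```sql
SET AUTOCOMMIT=0;
START TRANSACTION;

INSERT INTO staff (`Name`, `Gender`) VALUES ('ABC', 'M');
SET @sid = LAST_INSERT_ID();  -- capture once, before the audit INSERTs

INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES (@sid, 'staff', 'ID', @sid);
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES (@sid, 'staff', 'Name', 'ABC');
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES (@sid, 'staff', 'Gender', 'M');

COMMIT;
SET AUTOCOMMIT=1;
```

The audit INSERTs then all reference the staff row's id, regardless of the auto-increment values the changes table itself generates.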

Andre




On 11/28/06 7:50 PM, Mike Kruckenberg [EMAIL PROTECTED] wrote:

 Andre Matos wrote:
 SET AUTOCOMMIT=0;
 START TRANSACTION;
 INSERT INTO staff (`Name`) VALUES ('ABC');
 INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
 'staff', 'ABC');
 COMMIT;
 SET AUTOCOMMIT=1;
 
 This works fine in my test environment, however what about many users doing
 at the same time. Does the LAST_INSERT_ID() get the correct ID for each
 user?
 
 Is there a better way to do this or this is fine? I will be using this with
 PHP4.
   
 To further clarify (my initial reply didn't give much detail), when an
 auto increment value is created for inserting it is in the scope of the
 current connection, and is not changed by the outcome of the transaction.
 
 If you follow the string of SQL statements against your tables you'll
 see that the ID assigned to the record is not released on a rollback,
 the second insert gets a new auto increment value.
 
mysql> SET AUTOCOMMIT=0;
Query OK, 0 rows affected (0.00 sec)

mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO staff (`Name`) VALUES ('ABC');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
    -> 'staff', 'ABC');
Query OK, 1 row affected (0.00 sec)

mysql> select * from staff;
+----+------+
| ID | Name |
+----+------+
|  1 | ABC  |
+----+------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+-------+
| ID | Key | Table | Value |
+----+-----+-------+-------+
|  1 | 1   | staff | ABC   |
+----+-----+-------+-------+
1 row in set (0.00 sec)

mysql> select last_insert_id();
+------------------+
| last_insert_id() |
+------------------+
|                1 |
+------------------+
1 row in set (0.00 sec)

mysql> rollback;
Query OK, 0 rows affected (0.26 sec)

mysql> select * from staff;
Empty set (0.00 sec)

mysql> select * from changes;
Empty set (0.00 sec)

mysql> INSERT INTO staff (`Name`) VALUES ('ABC');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES (LAST_INSERT_ID(),
    -> 'staff', 'ABC');
Query OK, 1 row affected (0.00 sec)

mysql> select * from staff;
+----+------+
| ID | Name |
+----+------+
|  2 | ABC  |
+----+------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+-------+
| ID | Key | Table | Value |
+----+-----+-------+-------+
|  2 | 2   | staff | ABC   |
+----+-----+-------+-------+
1 row in set (0.00 sec)
 




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Mike Kruckenberg

Andre Matos wrote:
Thanks Mike. 


I understand the possible gaps that I might have if I use the ROLLBACK.
This is acceptable in my case.

What I really want to avoid is what I am doing now: open one transaction to
insert, update, or delete certain information and close it with the commit.
Then I get the LAST_INSERT_ID() and open another transaction to write my
audit trail. However, if the first one went through OK but I hit a problem
in the second transaction, I need to delete the inserted or updated rows,
or restore the deleted information. This doesn't work well.


Let's expand my staff and changes tables to have this structure to
simulate my problem:

+----+------+--------+
| ID | Name | Gender |
+----+------+--------+

+----+-----+-------+-------+-------+
| ID | Key | Table | Field | Value |
+----+-----+-------+-------+-------+

And do this:

SET AUTOCOMMIT=0;
START TRANSACTION;

INSERT INTO staff (`Name`, `Gender`) VALUES ('ABC', 'M');

INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'ID', LAST_INSERT_ID());
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'Name', 'ABC');
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(LAST_INSERT_ID(), 'Staff', 'Gender', 'M');

COMMIT;
SET AUTOCOMMIT=1;


I will get something like this:

mysql> select * from staff;
+----+------+--------+
| ID | Name | Gender |
+----+------+--------+
|  1 | ABC  | M      |
+----+------+--------+
1 row in set (0.00 sec)

mysql> select * from changes;
+----+-----+-------+--------+-------+
| ID | Key | Table | Field  | Value |
+----+-----+-------+--------+-------+
|  1 | 1   | staff | ID     | 1     |
|  2 | 1   | staff | Name   | ABC   |
|  3 | 2   | staff | Gender | M     |
+----+-----+-------+--------+-------+
3 rows in set (0.00 sec)

Notice the problem in the third row's Key column: it should be 1 but is 2
instead. This happened because LAST_INSERT_ID() returned the ID generated
in the changes table instead of the one from the desired staff table.

Is there any way to avoid this? What about the mysql_insert_id()?
  
I see. In this case you could make it two operations: use
mysql_insert_id() to capture the id from the first insert, set a PHP
variable to that value, and use that variable to ensure the same number.


What I would consider is setting a database variable inside the
transaction to store the id; the @ prefix signifies a session variable
that is specific to this connection:


SET AUTOCOMMIT=0;
START TRANSACTION;

INSERT INTO staff (`Name`, `Gender`) VALUES ('ABC','M');

SET @staff_id = LAST_INSERT_ID();

INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(@staff_id, 'Staff', 'ID', @staff_id);
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(@staff_id, 'Staff', 'Name', 'ABC');
INSERT INTO changes (`Key`, `Table`, `Field`, `Value`) VALUES
(@staff_id, 'Staff', 'Gender', 'M');

COMMIT;
SET AUTOCOMMIT=1;


So you store the value after the first insert and then reuse it. The
following string of SQL commands demonstrates:


mysql> INSERT INTO staff (`Name`, `Gender`) VALUES ('ABC','M');
Query OK, 1 row affected (0.01 sec)

mysql> SET @staff_id = LAST_INSERT_ID();
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT @staff_id;
+-----------+
| @staff_id |
+-----------+
|         3 |
+-----------+
1 row in set (0.00 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES
    -> (LAST_INSERT_ID(), 'staff', 'ABC');
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO changes (`Key`, `Table`, `Value`) VALUES
    -> (LAST_INSERT_ID(), 'staff', 'ABC');
Query OK, 1 row affected (0.00 sec)

mysql> select LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
|                5 |
+------------------+
1 row in set (0.00 sec)

mysql> SELECT @staff_id;
+-----------+
| @staff_id |
+-----------+
|         3 |
+-----------+
1 row in set (0.00 sec)
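The same capture-once idea translates directly to application code. Here is a
minimal Python/SQLite sketch (an illustration only, not MySQL; the `Key_` and
`Tbl` column names are hypothetical renames to dodge reserved words): grab the
generated id immediately after the first insert and reuse the saved value for
every audit row, just as the @staff_id session variable does above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE staff (ID INTEGER PRIMARY KEY, Name TEXT, Gender TEXT)")
cur.execute("CREATE TABLE changes (ID INTEGER PRIMARY KEY, Key_ INTEGER, "
            "Tbl TEXT, Field TEXT, Value TEXT)")

cur.execute("INSERT INTO staff (Name, Gender) VALUES ('ABC', 'M')")
staff_id = cur.lastrowid  # capture once, like SET @staff_id = LAST_INSERT_ID()

# Every audit row reuses the captured id, no matter how many ids the
# audit-table inserts themselves generate.
cur.executemany(
    "INSERT INTO changes (Key_, Tbl, Field, Value) VALUES (?, 'staff', ?, ?)",
    [(staff_id, "ID", str(staff_id)),
     (staff_id, "Name", "ABC"),
     (staff_id, "Gender", "M")],
)
conn.commit()

rows = cur.execute("SELECT Key_, Field FROM changes ORDER BY ID").fetchall()
print(rows)  # every Key_ is the staff id, never a changes id
```

The key point is the same as in the SQL session: read the generated id exactly
once, right after the insert that created it, before anything else can change it.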




Re: InnoDB Transaction and LAST_INSERT_ID()

2006-11-28 Thread Andre Matos
Thanks for all your help, Mike.

Problem solved. I divided the process into two parts: one writes the
insert/update/delete, and the other then writes the changes to the audit
trail, all inside one transaction. If the first part fails, ROLLBACK. If the
second part fails, ROLLBACK; otherwise, if both succeeded, COMMIT.

I just wanted to have all this in only one part, but that's fine. It's
working well.

Final question: can I create an audit trail using a TRIGGER in MySQL 5? This
would be the best approach, because any change in the database
(insert/update/delete) would fire the trigger, which would be responsible
for writing the audit trail.

Thanks again!!!

Andre








transaction in mysql 5

2006-10-31 Thread Ahmad Al-Twaijiry

Hi everyone

I have a PHP script that runs every minute and does a lot of SELECT
and UPDATE statements. All my tables are InnoDB, and I'm using PHP 5
and the PDO class (http://php.net/pdo) to connect to MySQL. In my script
I start the transaction (using the beginTransaction() method) at the
beginning of the script and commit at the end (also rollback if there
is any problem).

As I said, this script runs every minute. My question is:

What will happen if I run the script more than once at the same time
(like if I run it as a thread)?

Is it possible that one of my script processes will select the same data
that another process is selecting or updating? Or, if I run a second
process of my script, will it be unable to read/write anything until
the first script commits to the database or rolls back?

Thanks




Re: transaction in mysql 5

2006-10-31 Thread George-Cristian Bîrzan
On Tue, Oct 31, 2006 at 08:58:04PM +0300, Ahmad Al-Twaijiry wrote:
 is it possible that one of my script process will select the same data
 that the other process is selecting or updating ? or if I run a second
 process of my script it will not be able to read/write anything until
 the first script commit to the database or rollback ?

http://dev.mysql.com/doc/refman/5.0/en/innodb-transaction-isolation.html
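As a rough intuition for what the linked isolation levels mean for two
concurrent runs, here is a hedged Python/SQLite sketch (an analogy only:
SQLite locks the whole database for writes, whereas InnoDB locks individual
rows, so in MySQL two scripts only wait on each other when they touch the
same rows, up to innodb_lock_wait_timeout):

```python
import os
import sqlite3
import tempfile

# Two connections simulate two concurrent runs of the script.
# isolation_level=None disables the module's implicit transactions, so the
# BEGIN/COMMIT below are exactly what each "script" sends.
path = os.path.join(tempfile.mkdtemp(), "demo.db")  # assumed scratch file
w1 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
w2 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
w1.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, n INTEGER)")
w1.execute("INSERT INTO counters VALUES (1, 0)")

w1.execute("BEGIN IMMEDIATE")  # run #1 opens its transaction, takes the write lock
w1.execute("UPDATE counters SET n = n + 1 WHERE id = 1")

blocked = False
try:
    w2.execute("BEGIN IMMEDIATE")      # run #2 tries to write before #1 commits
except sqlite3.OperationalError:       # "database is locked": it has to wait/retry
    blocked = True
w1.execute("COMMIT")                   # once #1 commits, #2 would succeed
print(blocked)
```

So conflicting writers wait for each other; plain SELECTs, under InnoDB's
default REPEATABLE READ, proceed without blocking and see a consistent
snapshot, per the page linked above.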

-- 
George-Cristian Bîrzan




does mysqldump --single-transaction lock tables?

2006-09-13 Thread Leo

Hi, all. I want to use mysqldump to back up an InnoDB table with the option
--single-transaction. Does it lock all the tables? Thanks.

--
Leo
2006-09-13





