RE: Possible bug in mysqldump?

2008-08-05 Thread Rolando Edwards
When you use --master-data=1, the dump executes the CHANGE MASTER command
before any data is loaded.

Do the following to verify this:

Run 'mysqldump --single-transaction --master-data=1 -h... -u... -p... > DataDump1.sql'
Run 'mysqldump --single-transaction --master-data=2 -h... -u... -p... > DataDump2.sql'

Run 'head -30 DataDump1.sql'
You will see the CHANGE MASTER command before all CREATE TABLEs and INSERTs.
Therefore, it will execute.

Run 'head -30 DataDump2.sql'
You will see the CHANGE MASTER command before all CREATE TABLEs and INSERTs.
However, the command is commented out!
Therefore, it will not execute.

After loading DataDump2.sql, you can run the commented-out CHANGE MASTER
command by hand, using the replication coordinates (log file name and log
position) it contains.
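
For illustration, the top of each dump differs only in whether the statement
is live or commented out. Roughly (the log file name and position here are
made-up placeholders, not values you should expect):

  # from DataDump1.sql (--master-data=1): the statement is live
  CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=98;

  # from DataDump2.sql (--master-data=2): the statement is a SQL comment
  -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=98;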

Arguably, this is a design bug: with --master-data=1, the CHANGE MASTER
command should appear at the bottom of the mysqldump output, not at the top.
Yet it is at the top, so it executes immediately, and the slave then tries to
load your data and read from the master's binary logs at the same time,
guaranteeing duplicate key collisions.

This is why importing mysqldump straight to mysql via a pipe produces the error 
you are experiencing.

Try this:

1) In mysql session 1, run FLUSH TABLES WITH READ LOCK on the master.

2) In mysql session 1, run SHOW MASTER STATUS.

3) Record the log file and position from mysql session 1.

4) In mysql session 2 (on the slave), run 'STOP SLAVE;'

5) Run 'mysqldump --single-transaction -u root -pmypass -h masterHost dbName
| mysql -u root -pmypass -h slaveHost dbName'. Let it run to completion.

Notice I did not use --master-data in this mysqldump command.

6) In mysql session 2, run CHANGE MASTER TO MASTER_LOG_FILE='log file from
SHOW MASTER STATUS', MASTER_LOG_POS=log position from SHOW MASTER STATUS;

7) In mysql session 2, run 'START SLAVE;'

8) In mysql session 1, run 'UNLOCK TABLES;'
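
Stitched together, the whole procedure looks roughly like this. Hostnames,
credentials, dbName, and the binlog coordinates are placeholders carried
over from the steps above:

  # --- session 1: mysql client connected to the master ---
  # Keep this session open; the global read lock is released the
  # moment the connection closes.
  FLUSH TABLES WITH READ LOCK;
  SHOW MASTER STATUS;   -- record File and Position

  # --- session 2: shell on the slave ---
  mysql -u root -pmypass -e 'STOP SLAVE;'
  mysqldump --single-transaction -u root -pmypass -h masterHost dbName \
      | mysql -u root -pmypass -h slaveHost dbName

  # --- session 2: mysql client connected to the slave ---
  CHANGE MASTER TO
      MASTER_LOG_FILE='log file recorded in step 3',
      MASTER_LOG_POS=log_position_recorded_in_step_3;
  START SLAVE;

  # --- session 1: back on the master ---
  UNLOCK TABLES;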

Give it a try !!!

-Original Message-
From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 05, 2008 3:02 AM
To: mysql@lists.mysql.com
Subject: Possible bug in mysqldump?

Hi all,

I'm busy setting up replication and have encountered what looks like a bug
in mysqldump. The following commands work perfectly:

Running the following commands in the mysql client on the slave:
stop slave;
reset slave;
create database dbName;
CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='root',
MASTER_PASSWORD='mypass';

Then running the following on the command line on the slave:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName > masterDB.sql

mysql -u root -pmypass -h slaveHost dbName < masterDB.sql

Then running the following in the mysql client on the slave:

start slave;

At this point the slave comes up perfectly and is in sync with the master.

However, if I do exactly the same thing, but import the data using a pipe
command:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName | mysql -u root -pmypass -h slaveHost dbName

When I start the slave I get a duplicate key error. In other words, the
slave is trying to execute entries in the master's log that have already been
run.

I can't figure out why this happens, and it has forced me to store the dump
on disk as an intermediate file when setting up slaves.

The only difference between the two methods is that in the first case the
data is stored on disk and then imported via the client and in the second
case it's piped directly to the client. In both cases the data that
mysqldump produces is the same. Both include the CHANGE MASTER command that
sets the log file and position.

Is this a bug in mysqldump, or am I missing something?

Thanks in advance,

Mark.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks for the reply, Rolando.

In both the examples I provided (pipe and text file) the CHANGE MASTER
command appears at the top of the data import and is uncommented and
therefore executes before the data is imported. I don't think this is a
problem because the slave only starts replicating from the master once I run
the start slave command. That command is only run after all data is
imported.

Unless the slave does some kind of processing before I run 'start slave', I
don't see how this is the explanation.

Thanks again - and please let me know your thoughts on this because I could
be wrong.

Mark.

On Tue, Aug 5, 2008 at 8:47 AM, Rolando Edwards [EMAIL PROTECTED] wrote:

[quoted text snipped]




-- 
Mark Maunder [EMAIL PROTECTED]
http://markmaunder.com/
+1-206-6978723


RE: Possible bug in mysqldump?

2008-08-05 Thread Rolando Edwards
This is an excerpt from
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_master-data

"The --master-data option automatically turns off --lock-tables. It also turns
on --lock-all-tables, unless *--single-transaction* also is specified, in which
case, *a global read lock is acquired only for a short time at the beginning of
the dump* (see the description for --single-transaction). *In all cases, any
action on logs happens at the exact moment of the dump*." (Bold italics mine)
According to the preceding statement, the option --single-transaction WILL NOT
HOLD OFF ANY NEW TRANSACTIONS FOR THE DURATION OF THE mysqldump. Consequently,
the global read lock is released shortly after the dump begins, by design.

This is why I suggested locking all tables with FLUSH TABLES WITH READ LOCK on 
the master so no new transactions would sneak in during the pipe-fed mysql load 
from mysqldump.

Locking the master with FLUSH TABLES WITH READ LOCK should be done even if you 
are dumping to a text file in order to have a perfect snapshot of the data.

Additionally, the option --single-transaction WILL NOT PROTECT MyISAM tables
from live changes being written to the dump file, since you cannot run
ACID-compliant transactions against MyISAM, only InnoDB.
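
A quick way to check which tables are exposed (a sketch; dbName and the
credentials are placeholders, and information_schema requires MySQL 5.0 or
later):

  # Run against the master; lists every non-InnoDB table in dbName,
  # i.e. the tables --single-transaction's snapshot does not cover.
  mysql -u root -pmypass -h masterHost -e "
      SELECT TABLE_NAME, ENGINE
      FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = 'dbName' AND ENGINE <> 'InnoDB';"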

Doing FLUSH TABLES WITH READ LOCK on the master prior to the mysqldump will
guarantee that no writes, whether to MyISAM or InnoDB tables, will come
through during the mysqldump.


From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 05, 2008 12:17 PM
To: Rolando Edwards
Cc: mysql@lists.mysql.com
Subject: Re: Possible bug in mysqldump?

[quoted text snipped]

Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks Rolando,

I'm using InnoDB tables. According to the docs, the --single-transaction
option:

  Creates a consistent snapshot by dumping all tables in a single
  transaction. Works ONLY for tables stored in storage engines which
  support multiversioning (currently only InnoDB does); the dump is NOT
  guaranteed to be consistent for other storage engines. Option
  automatically turns off --lock-tables.

That seems to contradict what you're saying. I think the key is that InnoDB
supports multiversioning and that --single-transaction creates a snapshot
version of the db by briefly locking all tables. That has the same effect
as locking MyISAM tables for the duration of the dump, as I understand it.
Can anyone confirm this? So this still doesn't explain the different
behaviour between pipe and redirect that I'm seeing.
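
One way to test that directly is to capture exactly what the pipe delivers
and compare it with the file-based dump. A sketch, assuming tee is available
and reusing the placeholder hosts and credentials from earlier in the thread:

  # tee writes a copy of the piped stream to piped.sql while still
  # feeding it to the mysql client on the slave
  mysqldump --single-transaction --master-data=1 -u root -pmypass \
      -h masterHost dbName | tee piped.sql \
      | mysql -u root -pmypass -h slaveHost dbName

  # Compare against the earlier file-based dump; the CHANGE MASTER
  # coordinates will differ if any writes hit the master in between,
  # so do this on a quiet master.
  diff piped.sql masterDB.sql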

Regards,

Mark.


On Tue, Aug 5, 2008 at 11:55 AM, Rolando Edwards [EMAIL PROTECTED] wrote:

[quoted text snipped]