Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Hi all,

I'm busy setting up replication and have encountered what looks like a bug
in mysqldump. The following commands work perfectly:

Running the following commands in the mysql client on the slave:
stop slave;
reset slave;
create database dbName;
CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='root',
MASTER_PASSWORD='mypass';

Then running the following on the command line on the slave:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h masterHost dbName > masterDB.sql

mysql -u root -pmypass -h slaveHost dbName < masterDB.sql

Then running the following in the mysql client on the slave:

start slave;

At this point the slave comes up perfectly and is in sync with the master.

However, if I do exactly the same thing, but import the data using a pipe
command:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName | mysql -u root -pmypass -h slaveHost dbName

When I start the slave I get a duplicate key error. In other words, the
slave is trying to execute entries in the master's log that have already been
run.

I can't figure out why this is a problem, and it has forced me to store the
data in an intermediate file on disk when setting up slaves.

The only difference between the two methods is that in the first case the
data is stored on disk and then imported via the client, while in the second
case it's piped directly to the client. In both cases the data that
mysqldump produces is the same, and both include the CHANGE MASTER command
that sets the log file and position.

Is this a bug in mysqldump, or am I missing something?

Thanks in advance,

Mark.


Too many connections

2008-08-05 Thread Krishna Chandra Prajapati
Hi All,

I am trying to connect to the MySQL server, but it's giving "too many
connections". How do I increase max_connections on the MySQL server?

Even when I connect with mysql -u root -ppassword, it still gives "too many
connections".

How can I solve this problem?

--Thanks
Krishna Chandra Prajapati


Re: Too many connections

2008-08-05 Thread Krishna Chandra Prajapati
Thanks a lot.

Is there any way to increase the maximum number of connections without
restarting the MySQL server?

On Tue, Aug 5, 2008 at 2:39 PM, Vladislav Vorobiev [EMAIL PROTECTED] wrote:

 2008/8/5 Krishna Chandra Prajapati [EMAIL PROTECTED]:
  [snip]

 set-variable=max_connections=your value

 in my.cnf and restart the MySQL server. I think this is the problem.


 Vladislav




-- 
Krishna Chandra Prajapati


Re: Too many connections

2008-08-05 Thread Ananda Kumar
You can do this:

set global max_connections=2500;
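Note that SET GLOBAL only changes the running server; the value is lost on
restart. To make the change permanent as well, the setting also needs to go
into the [mysqld] section of my.cnf, along these lines (2500 is just an
example value):

[mysqld]
max_connections = 2500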



On 8/5/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 [snip]



Re: Too many connections

2008-08-05 Thread Vladislav Vorobiev
2008/8/5 Krishna Chandra Prajapati [EMAIL PROTECTED]:
 [snip]


set-variable=max_connections=your value

in my.cnf and restart the MySQL server. I think this is the problem.


Vladislav




Problems with sessions from Cisco CSM

2008-08-05 Thread Günter Radakovits
Hi there,

 

We've installed a MySQL NDB Cluster which is running fine.

There are two Cisco CSM load balancers that distribute the workload over the
query nodes.

The load balancers also do TCP probes to the MySQL port (3306) on both query
nodes.

Each such probe generates a session on the query node.

The probes are done every 20 seconds, so we get 180 new sessions per hour.

After some time no new connections to MySQL are possible - ERROR 1040
(0): Too many connections

Any idea how to fix this problem?

Maybe a parameter to disconnect idle sessions after some time or similar?

Or another way to probe MySQL?

 

Thanks

Günter
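For reference, the idle-session parameter being asked about is presumably
wait_timeout, the number of seconds the server lets a connection sit idle
before closing it. A minimal sketch, assuming the probe connections show up
as ordinary idle sessions (the values are placeholders):

SET GLOBAL wait_timeout = 60;     -- applies to connections opened after this change
SET GLOBAL connect_timeout = 10;  -- limits clients that connect but never complete the handshake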



RE: Possible bug in mysqldump?

2008-08-05 Thread Rolando Edwards
When you use --master-data=1, the dump executes the CHANGE MASTER command
first, before loading any data.

Do the following to verify this:

Run 'mysqldump --single-transaction --master-data=1 -h... -u... -p... > DataDump1.sql'
Run 'mysqldump --single-transaction --master-data=2 -h... -u... -p... > DataDump2.sql'

Run 'head -30 DataDump1.sql'
You will see the CHANGE MASTER command before all CREATE TABLEs and INSERTs.
Therefore, it will execute.

Run 'head -30 DataDump2.sql'
You will see the CHANGE MASTER command before all CREATE TABLEs and INSERTs.
However, the command is commented out !!!
Therefore, it will not execute.
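For illustration, the relevant line near the top of each dump looks
something like this (log file name and position are placeholders):

-- DataDump1.sql (--master-data=1): the statement is live and executes on import
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;

-- DataDump2.sql (--master-data=2): the same statement is commented out
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;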

After loading DataDump2.sql, you can then run CHANGE MASTER TO using the
replication coordinates (log file name and log position) from the
commented-out CHANGE MASTER command, once the data are loaded.

In theory, it is a paradigm bug: when using --master-data=1, the CHANGE
MASTER command should appear at the bottom of the mysqldump output, not at
the top. Yet it is at the top, so it executes immediately and the slave then
tries to load your data and read from the master's binary logs at the same
time, guaranteeing a duplicate key collision.

This is why importing mysqldump straight to mysql via a pipe produces the error 
you are experiencing.

Try this:

1) In mysql session 1, run FLUSH TABLES WITH READ LOCK on the master.

2) In mysql session 1, run SHOW MASTER STATUS.

3) Record the log file and position from mysql session 1.

4) In mysql session 2, run 'STOP SLAVE;'

5) Run 'mysqldump --single-transaction -u root -pmypass -h masterHost dbName | mysql -u root -pmypass -h slaveHost dbName'. Let it run to completion.

Notice I did not use --master-data in the mysqldump.

6) In mysql session 2, run CHANGE MASTER TO MASTER_LOG_FILE='<log file from SHOW MASTER STATUS>', MASTER_LOG_POS=<log position from SHOW MASTER STATUS>;

7) In mysql session 2, run 'START SLAVE'.

8) In mysql session 1, run 'UNLOCK TABLES'.
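Condensed into a single transcript (the coordinates are placeholders for
whatever SHOW MASTER STATUS reports):

-- session 1, on the master: hold a global read lock for the duration of the dump
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record File and Position

-- session 2, on the slave (run the mysqldump | mysql pipe from a shell in between):
STOP SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;
START SLAVE;

-- session 1, on the master, once the slave is running:
UNLOCK TABLES;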

Give it a try !!!

-Original Message-
From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 05, 2008 3:02 AM
To: mysql@lists.mysql.com
Subject: Possible bug in mysqldump?

[snip]



Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks for the reply, Rolando.

In both the examples I provided (pipe and text file) the CHANGE MASTER
command appears at the top of the data import and is uncommented, and
therefore executes before the data is imported. I don't think this is a
problem, because the slave only starts replicating from the master once I
run the 'start slave' command. That command is only run after all the data
is imported.

Unless the slave does some kind of processing before I run 'start slave', I
don't see how this is the explanation.

Thanks again - and please let me know your thoughts on this because I could
be wrong.

Mark.

On Tue, Aug 5, 2008 at 8:47 AM, Rolando Edwards [EMAIL PROTECTED] wrote:

 [snip]




-- 
Mark Maunder [EMAIL PROTECTED]
http://markmaunder.com/
+1-206-6978723


RE: Possible bug in mysqldump?

2008-08-05 Thread Rolando Edwards
This is an excerpt from
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_master-data

"The --master-data option automatically turns off --lock-tables. It also
turns on --lock-all-tables, unless --single-transaction also is specified,
in which case, *a global read lock is acquired only for a short time at the
beginning of the dump* (see the description for --single-transaction). *In
all cases, any action on logs happens at the exact moment of the dump*."
(Bold italics mine)

According to the preceding statement, the option --single-transaction WILL
NOT HOLD OFF ANY NEW TRANSACTIONS FOR THE DURATION OF THE mysqldump.
Consequently, somewhere in the middle of the dump process, table locks are
released prematurely by design.

This is why I suggested locking all tables with FLUSH TABLES WITH READ LOCK
on the master, so no new transactions would sneak in during the pipe-fed
mysql load from mysqldump.

Locking the master with FLUSH TABLES WITH READ LOCK should be done even if
you are dumping to a text file, in order to have a perfect snapshot of the
data.

Additionally, the option --single-transaction WILL NOT PROTECT MyISAM tables
from live changes being written to the dump file, since you cannot run
ACID-compliant transactions against MyISAM, only InnoDB.

Doing FLUSH TABLES WITH READ LOCK on the master prior to the mysqldump will
guarantee that no transactions, regardless of whether they are for MyISAM or
InnoDB, will come through during a mysqldump.
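For concreteness, SHOW MASTER STATUS returns the coordinates to record,
along these lines (names and numbers are placeholders):

SHOW MASTER STATUS;
-- +------------------+----------+--------------+------------------+
-- | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
-- +------------------+----------+--------------+------------------+
-- | mysql-bin.000003 |       73 |              |                  |
-- +------------------+----------+--------------+------------------+

One practical caveat: the global read lock is released as soon as the
session that ran FLUSH TABLES WITH READ LOCK disconnects, so that session
must stay connected until the dump finishes.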


From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 05, 2008 12:17 PM
To: Rolando Edwards
Cc: mysql@lists.mysql.com
Subject: Re: Possible bug in mysqldump?

[snip]

Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks Rolando,

I'm using InnoDB tables. According to the docs, the --single-transaction
option:

  Creates a consistent snapshot by dumping all tables in a single
  transaction. Works ONLY for tables stored in storage engines which
  support multiversioning (currently only InnoDB does); the dump is NOT
  guaranteed to be consistent for other storage engines. Option
  automatically turns off --lock-tables.

That seems to contradict what you're saying. I think the key is that InnoDB
supports multiversioning, and that --single-transaction creates a snapshot
version of the DB by briefly locking all tables. As I understand it, that
has the same effect as locking MyISAM tables for the duration of the dump.
Can anyone confirm this? So this still doesn't explain the different
behaviour between pipe and redirect that I'm seeing.
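For reference, --single-transaction makes mysqldump open its session roughly
like this before reading any tables (a sketch; the exact statements vary by
server version):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- every SELECT the dump runs now reads from this one InnoDB snapshot;
-- MyISAM tables have no versioning, so they get no such protection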

Regards,

Mark.


On Tue, Aug 5, 2008 at 11:55 AM, Rolando Edwards [EMAIL PROTECTED] wrote:

 [snip]

Install Microsoft.Jet

2008-08-05 Thread Sivasakthi

Hi all,

I have tried to import an Excel file into the DB, but I get the following
error:

"The OLE DB provider Microsoft.Jet.OLEDB.4.0 has not been registered."

How can I install Microsoft.Jet?

System Info:
OS Name: Microsoft(R) Windows(R) Server 2003, Enterprise Edition for 64-Bit
Itanium-based Systems
System Type: Itanium(TM)-based System
Processor: ia64 Family 31 Model 1 Stepping 5 GenuineIntel ~1300



Thanks,
Siva
