Re: help with replication

2010-08-18 Thread Norman Khine
thanks

On Wed, Aug 18, 2010 at 10:42 PM,   wrote:
> Quoting Norman Khine :
>
>
>>>
>>> What is shown from "show master status" and "show slave status" after you
>>> have made a change on the master DB?
>>
>> this is the output:
>>
>> http://pastie.org/1100610
>>
>> it does not seem to have any changes and "show slave status" is just
>> empty.
>>
>> have I missed adding something to the master's /etc/mysql/my.cnf options?
>>>
> So you can see the binlog position on the master has not changed, hence you
> won't get any changes replicated to the slave. "show slave status" has to
> be run on the slave, not the master.
> Anyway, I believe your problem is your binlog-do-db section on the master,
> and also the ignore sections; I think these need to be broken into separate
> lines, with only one value per line, i.e.:
>
> binlog-do-db = upgrade
> binlog-do-db = tracker
this was the issue
>
> You'll need to completely set up the synchronisation again after this, as you
> currently have out-of-sync DBs and no data stored in your binlogs...
>
>
>
>
>



-- 
˙uʍop ǝpısdn p,uɹnʇ pןɹoʍ ǝɥʇ ǝǝs noʎ 'ʇuǝɯɐן sǝɯıʇ ǝɥʇ puɐ 'ʇuǝʇuoɔ
ǝq s,ʇǝן ʇǝʎ
%>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or
chr(97+(ord(c)-83)%26) for c in ",adym,*)&uzq^zqf" ] )

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Xn Nooby
From what I have read, ALTER TABLE to add an index causes the entire
table to be duplicated, so wouldn't my ALTER TABLE command be
duplicating the work done by the SELECT command?



On Wed, Aug 18, 2010 at 4:50 PM, mos  wrote:
> At 02:52 PM 8/18/2010, Xn Nooby wrote:
>>
>> Below is a generic version of the code I am trying.  It does copy the
>> rows very quickly, but I will have to test to see how quickly the
>> indices are built.  Is the below code what you were suggesting?  I had
>> a little trouble dropping and later adding the primary index, but I
>> think I got it figured out.
>>
>> Below I basically do this:
>>  make the_table_clone from the_table
>>  drop the indices on the_table_clone
>>  copy the rows from the_table to the_table_clone
>>  add the indices back to the_table_clone
>>
>> If this runs fast enough, I will then drop the_table, and rename
>> the_table_clone to the_table
>>
>>
>> USE the_database;
>>
>>
>> DROP TABLE IF EXISTS the_table_clone;
>>
>> CREATE TABLE the_table_clone LIKE the_table;
>
> Or you can try something like:
>
> create table the_table_clone engine=innodb select * from the_table limit 0;
> This will create the same table structure but not the indexes so you don't
> have to drop the indexes below.
>
>
>
>> # drop minor indices on clone
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>>
>>
>> # drop primary index on clone
>>
>> ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;
>
> You still need the statement above to change the autoinc to integer if you
> use my Create Table... statement above.
>
>
>
>> ALTER TABLE the_table_clone DROP PRIMARY KEY;
>>
>>
>> # add 2 new columns to clone
>>
>> ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>> price_amount float DEFAULT '0';
>>
>>
>> # copy rows
>>
>> INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
>> 0,1000;
>>
>> #INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;
>
> Why do you have two insert statements? If you are inserting a group of
> records at a time then you need a limit statement on each, and increment the
> offset by the number of rows that have been added.
>
> I would explicitly specify the column list for both the Insert and the
> Select to make sure they match up. There is no point going through all this
> if it inserts the data into the wrong columns!
> Check the data before creating the indexes to make sure the same number of
> rows have been copied over and the data is in the correct columns.
>
>
>
>> # Add back indices in one command (for max speed)
>>
>> ALTER TABLE the_table_clone \
>>  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
>>  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
>>  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
>>  MODIFY id INT SIGNED AUTO_INCREMENT,\
>>  ADD PRIMARY KEY(col1);
>
> Correct.
>
> Mike
>
>
>




Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Xn Nooby
It appears the ALTER TABLE starts off quick and then slows down.  I
feel like the indices are larger than the allocated RAM, and the system is
slowing down because it is busy swapping out to disk.  Is there an
InnoDB-specific buffer that can help with this?  The "sort_buffer_size"
apparently is only for MyISAM files.

My "InnoDB specific" settings are:

innodb_additional_mem_pool_size=13M
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=7M
innodb_buffer_pool_size=616M
innodb_log_file_size=24M
innodb_thread_concurrency=10



On Wed, Aug 18, 2010 at 4:50 PM, mos  wrote:
> At 02:52 PM 8/18/2010, Xn Nooby wrote:
>>
>> Below is a generic version of the code I am trying.  It does copy the
>> rows very quickly, but I will have to test to see how quickly the
>> indices are built.  Is the below code what you were suggesting?  I had
>> a little trouble dropping and later adding the primary index, but I
>> think I got it figured out.
>>
>> Below I basically do this:
>>  make the_table_clone from the_table
>>  drop the indices on the_table_clone
>>  copy the rows from the_table to the_table_clone
>>  add the indices back to the_table_clone
>>
>> If this runs fast enough, I will then drop the_table, and rename
>> the_table_clone to the_table
>>
>>
>> USE the_database;
>>
>>
>> DROP TABLE IF EXISTS the_table_clone;
>>
>> CREATE TABLE the_table_clone LIKE the_table;
>
> Or you can try something like:
>
> create table the_table_clone engine=innodb select * from the_table limit 0;
> This will create the same table structure but not the indexes so you don't
> have to drop the indexes below.
>
>
>
>> # drop minor indices on clone
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>>
>>
>> # drop primary index on clone
>>
>> ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;
>
> You still need the statement above to change the autoinc to integer if you
> use my Create Table... statement above.
>
>
>
>> ALTER TABLE the_table_clone DROP PRIMARY KEY;
>>
>>
>> # add 2 new columns to clone
>>
>> ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>> price_amount float DEFAULT '0';
>>
>>
>> # copy rows
>>
>> INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
>> 0,1000;
>>
>> #INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;
>
> Why do you have two insert statements? If you are inserting a group of
> records at a time then you need a limit statement on each, and increment the
> offset by the number of rows that have been added.
>
> I would explicitly specify the column list for both the Insert and the
> Select to make sure they match up. There is no point going through all this
> if it inserts the data into the wrong columns!
> Check the data before creating the indexes to make sure the same number of
> rows have been copied over and the data is in the correct columns.
>
>
>
>> # Add back indices in one command (for max speed)
>>
>> ALTER TABLE the_table_clone \
>>  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
>>  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
>>  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
>>  MODIFY id INT SIGNED AUTO_INCREMENT,\
>>  ADD PRIMARY KEY(col1);
>
> Correct.
>
> Mike
>
>
>




Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Xn Nooby
Hi Mike, my comments are below:

On Wed, Aug 18, 2010 at 4:50 PM, mos  wrote:
> At 02:52 PM 8/18/2010, Xn Nooby wrote:
>>
>> Below is a generic version of the code I am trying.  It does copy the
>> rows very quickly, but I will have to test to see how quickly the
>> indices are built.  Is the below code what you were suggesting?  I had
>> a little trouble dropping and later adding the primary index, but I
>> think I got it figured out.
>>
>> Below I basically do this:
>>  make the_table_clone from the_table
>>  drop the indices on the_table_clone
>>  copy the rows from the_table to the_table_clone
>>  add the indices back to the_table_clone
>>
>> If this runs fast enough, I will then drop the_table, and rename
>> the_table_clone to the_table
>>
>>
>> USE the_database;
>>
>>
>> DROP TABLE IF EXISTS the_table_clone;
>>
>> CREATE TABLE the_table_clone LIKE the_table;
>
> Or you can try something like:
>
> create table the_table_clone engine=innodb select * from the_table limit 0;
> This will create the same table structure but not the indexes so you don't
> have to drop the indexes below.


That is good to know.  I did not mind dropping the indices in this
case, because the table was still empty.


>
>
>
>> # drop minor indices on clone
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>>
>>
>> # drop primary index on clone
>>
>> ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;
>
> You still need the statement above to change the autoinc to integer if you
> use my Create Table... statement above.
>
>
>
>> ALTER TABLE the_table_clone DROP PRIMARY KEY;
>>
>>
>> # add 2 new columns to clone
>>
>> ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>> price_amount float DEFAULT '0';
>>
>>
>> # copy rows
>>
>> INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
>> 0,1000;
>>
>> #INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;
>
> Why do you have two insert statements? If you are inserting a group of
> records at a time then you need a limit statement on each, and increment the
> offset by the number of rows that have been added.


The 2nd INSERT is disabled with the # character.  I am using the
statement with the LIMIT for testing, and will switch to the other
command when I want to process all the records.


>
> I would explicitly specify the column list for both the Insert and the
> Select to make sure they match up. There is no point going through all this
> if it inserts the data into the wrong columns!
> Check the data before creating the indexes to make sure the same number of
> rows have been copied over and the data is in the correct columns.


Okay.  I thought it was safe to assume that the new columns would
appear on the "right side" of the column list.


>
>
>
>> # Add back indices in one command (for max speed)
>>
>> ALTER TABLE the_table_clone \
>>  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
>>  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
>>  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
>>  MODIFY id INT SIGNED AUTO_INCREMENT,\
>>  ADD PRIMARY KEY(col1);
>
> Correct.


The insert took 7 minutes on 10M rows, and that ALTER command took
another 46 minutes.


>
> Mike
>
>
>




Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread mos

At 02:52 PM 8/18/2010, Xn Nooby wrote:

Below is a generic version of the code I am trying.  It does copy the
rows very quickly, but I will have to test to see how quickly the
indices are built.  Is the below code what you were suggesting?  I had
a little trouble dropping and later adding the primary index, but I
think I got it figured out.

Below I basically do this:
  make the_table_clone from the_table
  drop the indices on the_table_clone
  copy the rows from the_table to the_table_clone
  add the indices back to the_table_clone

If this runs fast enough, I will then drop the_table, and rename
the_table_clone to the_table


USE the_database;


DROP TABLE IF EXISTS the_table_clone;

CREATE TABLE the_table_clone LIKE the_table;


Or you can try something like:

create table the_table_clone engine=innodb select * from the_table limit 0;
This will create the same table structure but not the indexes so you don't 
have to drop the indexes below.





# drop minor indices on clone

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;


# drop primary index on clone

ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;


You still need the statement above to change the autoinc to integer if you 
use my Create Table... statement above.





ALTER TABLE the_table_clone DROP PRIMARY KEY;


# add 2 new columns to clone

ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
price_amount float DEFAULT '0';


# copy rows

INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 0,1000;

#INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;


Why do you have two insert statements? If you are inserting a group of 
records at a time then you need a limit statement on each, and increment 
the offset by the number of rows that have been added.


I would explicitly specify the column list for both the Insert and the 
Select to make sure they match up. There is no point going through all this 
if it inserts the data into the wrong columns!
Check the data before creating the indexes to make sure the same number of 
rows have been copied over and the data is in the correct columns.





# Add back indices in one command (for max speed)

ALTER TABLE the_table_clone \
  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
  MODIFY id INT SIGNED AUTO_INCREMENT,\
  ADD PRIMARY KEY(col1);


Correct.

Mike
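
As a concrete sketch of the chunked copy Mike suggests above, with an explicit column list on both the INSERT and the SELECT — column names follow the placeholder names used elsewhere in the thread, and the chunk size is illustrative:

```sql
-- chunk 1
INSERT INTO the_table_clone
    (id, col1, col2, col3, col4, col5, col6, flag_field, price_amount)
SELECT id, col1, col2, col3, col4, col5, col6, 0, 0
FROM the_table
ORDER BY id            -- deterministic order, so chunks don't overlap
LIMIT 0, 1000000;

-- chunk 2: advance the offset by the chunk size, and so on
INSERT INTO the_table_clone
    (id, col1, col2, col3, col4, col5, col6, flag_field, price_amount)
SELECT id, col1, col2, col3, col4, col5, col6, 0, 0
FROM the_table
ORDER BY id
LIMIT 1000000, 1000000;
```

Note that large OFFSET values make each successive chunk slower, since the server still scans and discards the skipped rows; keeping track of the last copied id and using WHERE id > last_id ... LIMIT n avoids that rescan.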






Re: help with replication

2010-08-18 Thread a . smith

Quoting Norman Khine :




What is shown from "show master status" and "show slave status" after you
have made a change on the master DB?

this is the output:

http://pastie.org/1100610

it does not seem to have any changes and "show slave status" is just empty.

have I missed adding something to the master's /etc/mysql/my.cnf options?


So you can see the binlog position on the master has not changed,
hence you won't get any changes replicated to the slave. "show slave
status" has to be run on the slave, not the master.
Anyway, I believe your problem is your binlog-do-db section on the
master, and also the ignore sections; I think these need to be broken
into separate lines, with only one value per line, i.e.:


binlog-do-db = upgrade
binlog-do-db = tracker

You'll need to completely set up the synchronisation again after this, as
you currently have out-of-sync DBs and no data stored in your binlogs...
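
A minimal re-initialisation sequence, sketched under the assumption of a standard 5.x master/slave setup — host names, credentials, and the log file name and position are placeholders to be filled in from SHOW MASTER STATUS:

```sql
-- On the master: freeze writes and record the binlog position
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note the File and Position values
-- take a consistent dump of the replicated DBs while the lock is held, then:
UNLOCK TABLES;

-- On the slave: load the dump, then point it at the recorded position
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',    -- placeholder
    MASTER_USER = 'repl',                  -- placeholder
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',  -- File from SHOW MASTER STATUS
    MASTER_LOG_POS = 107;                  -- Position from SHOW MASTER STATUS
START SLAVE;
SHOW SLAVE STATUS\G   -- Slave_IO_Running and Slave_SQL_Running should both say Yes
```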









Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Xn Nooby
Below is a generic version of the code I am trying.  It does copy the
rows very quickly, but I will have to test to see how quickly the
indices are built.  Is the below code what you were suggesting?  I had
a little trouble dropping and later adding the primary index, but I
think I got it figured out.

Below I basically do this:
  make the_table_clone from the_table
  drop the indices on the_table_clone
  copy the rows from the_table to the_table_clone
  add the indices back to the_table_clone

If this runs fast enough, I will then drop the_table, and rename
the_table_clone to the_table


USE the_database;


DROP TABLE IF EXISTS the_table_clone;

CREATE TABLE the_table_clone LIKE the_table;


# drop minor indices on clone

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;


# drop primary index on clone

ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;

ALTER TABLE the_table_clone DROP PRIMARY KEY;


# add 2 new columns to clone

ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
price_amount float DEFAULT '0';


# copy rows

INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 0,1000;

#INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;


# Add back indices in one command (for max speed)

ALTER TABLE the_table_clone \
  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
  MODIFY id INT SIGNED AUTO_INCREMENT,\
  ADD PRIMARY KEY(col1);




Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Yves Goergen
On 18.08.2010 20:42 CE(S)T, Mark Matthews wrote:
> For what it's worth, the MySQL JDBC driver has had client-side SSL
> require (i.e. "requireSSL=true") since 2003 and the ADO.Net driver
> has had "SSL Mode=Required" since 2009.

Cool, so would it be possible to also have this in the MySQL Workbench
client which seems to be .NET-based?

-- 
Yves Goergen "LonelyPixel" 
Visit my web laboratory at http://beta.unclassified.de




Next MySQL GA release

2010-08-18 Thread Marten Lehmann

Hi,

does anyone know when the GA release of a 5.5 or 5.6 trunk is planned? I
noticed that the versions switched from 5.5.4-beta to 5.5.5-m3 in only a
few weeks. But it has happened so often that a milestone was followed by
another beta release instead of the next GA release that I have no clue
what will happen after 5.5.5-m3.


So, in short: is there any chance that a 5.5 or 5.6 GA release will be
ready by the end of September, or should I still set up a new server with
5.1.x?


Kind regards
Marten Lehmann




Re: help with replication

2010-08-18 Thread Norman Khine
hi

On Wed, Aug 18, 2010 at 8:00 PM,   wrote:
> Quoting Norman Khine :
>
>> hello,
>> I have an issue with replication; here is my procedure:
>>
>> http://pastie.org/1100368
>>
>> in the log it shows that replication works, but when I update a record it
>> is not updated on the server.
>
> What is shown from "show master status" and "show slave status" after you
> have made a change on the master DB?
this is the output:

http://pastie.org/1100610

it does not seem to have any changes and "show slave status" is just empty.

have I missed adding something to the master's /etc/mysql/my.cnf options?
>
>
>
>



-- 
˙uʍop ǝpısdn p,uɹnʇ pןɹoʍ ǝɥʇ ǝǝs noʎ 'ʇuǝɯɐן sǝɯıʇ ǝɥʇ puɐ 'ʇuǝʇuoɔ
ǝq s,ʇǝן ʇǝʎ
%>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or
chr(97+(ord(c)-83)%26) for c in ",adym,*)&uzq^zqf" ] )




Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Mark Matthews

On Aug 18, 2010, at 1:34 PM, Shawn Green (MySQL) wrote:

> On 8/18/2010 2:22 PM, Anders Kaseorg wrote:
>> On Wed, 18 Aug 2010, Shawn Green (MySQL) wrote:
>>> If the server specifies REQUIRES SSL then that client cannot connect 
>>> without going through the full SSL validation process. This means that 
>>> Mallory would need to present the same security credentials that Alice has 
>>> in order to qualify as a secure user (the same certs, same password, login 
>>> from the correct host, etc).
>> Mallory got the username and hashed password from Alice over the unencrypted 
>> connection, and we assume that Mallory, like any good MITM, has the ability 
>> to intercept and forge traffic for arbitrary hosts.  So this attack goes 
>> through against anyone using passwords over SSL.  This already constitutes a 
>> vulnerability.
>> Setting up client certificates does help to prevent this form of attack 
>> where Mallory tries to issue evil commands to Bob.  It does not, however, 
>> prevent the attack where Mallory ignores Bob, and uses only the unencrypted 
>> connection to steal data from Alice or poison her with false data.  This 
>> also constitutes a vulnerability, which, as far as I can see, cannot be 
>> prevented in any way with the current MySQL software.
>>> Your redirect has pointed out to me what I missed in Yves's first post. In 
>>> order for the client to require an SSL connection, you have to designate a 
>>> certificate for it to use for the connection.
>> No, that doesn’t work either!  Against a server with SSL disabled:
>> $ mysql --ssl --ssl-verify-server-cert \
>>--ssl-ca=/etc/ssl/certs/ca-certificates.crt \
>>--ssl-cert=Private/andersk.pem \
>>--ssl-key=Private/andersk.pem \
>>-h MY-SERVER
>> Welcome to the MySQL monitor.  Commands end with ; or \g.
>> …
>> mysql> \s
>> --
>> mysql  Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using 
>> readline 6.1
>> …
>> SSL: Not in use
>>> From the same page but a few lines above the line he quoted
>>> ##
>>> This option is not sufficient in itself to cause an SSL connection to be 
>>> used.
>>> You must also specify the --ssl-ca option, and possibly the --ssl-cert and
>>> --ssl-key options.
>>> ##
>> This documentation appears to be wrong.
>> Anders
> 
> Excellent logic.
> 
> I have updated bug #3138 with a private comment to explain your presentation 
> of the vulnerability.
> http://bugs.mysql.com/bug.php?id=3138

Shawn, Anders, Yves,

For what it's worth, the MySQL JDBC driver has had client-side SSL require 
(i.e. "requireSSL=true") since 2003 and the ADO.Net driver has had "SSL 
Mode=Required" since 2009.

-Mark
-- 
Mark Matthews
Principal Software Developer -  MySQL Enterprise Tools
Oracle
http://www.mysql.com/products/enterprise/monitor.html
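
For reference, the options Mark mentions look roughly like this in practice — the host, database, and credentials are placeholders, and the parameter names are taken from his note:

```
JDBC (Connector/J) URL:
    jdbc:mysql://db.example.com:3306/mydb?useSSL=true&requireSSL=true

ADO.Net (Connector/Net) connection string:
    Server=db.example.com;Database=mydb;Uid=alice;Pwd=secret;SSL Mode=Required
```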











Re: idle query

2010-08-18 Thread Shawn Green (MySQL)

On 8/12/2010 2:32 PM, Mike Spreitzer wrote:
I also find that if I have both tables in MyISAM and use STRAIGHT_JOIN to 
force the better query plan (enumerate the longer table, for each longer 
table row use the shorter table's index to pick out the one right matching 
row from the shorter table) then the server has low I/O utilization but 
the CPU utilization is about as high as can be expected for a single query 
running on a 16-CPU machine.  Why should this thing be CPU-bound?  Here is 
the query:


create table fp2 (p VARCHAR(200) NOT NULL,
    rcd DATETIME NOT NULL, rcms SMALLINT NOT NULL, msgid BIGINT NOT NULL,
    q VARCHAR(200) NOT NULL, scd DATETIME NOT NULL, scms SMALLINT NOT NULL,
    lat DECIMAL(14,3),
    INDEX p(p), INDEX q(q) )
    AS SELECT fldrcv.p, fldrcv.cd as rcd, fldrcv.cms as rcms,
    fldrcv.msgid as msgid, fldrcv.q, fldsnd.cd as scd, fldsnd.cms as scms,
    TIMESTAMPDIFF(SECOND, fldsnd.cd, fldrcv.cd) +
        (fldrcv.cms-fldsnd.cms)/1000 as lat
    FROM fldrcv STRAIGHT_JOIN fldsnd
    ON fldrcv.q=fldsnd.p AND fldrcv.qboot=fldsnd.pboot
    AND fldrcv.msgid=fldsnd.msgid;

and here is some `iostat -x 5` output that shows a total of less than 50% 
I/O utilization and about 15/16 CPU utilization:


...



You are doing a lot of index work, which requires a lot of memory
manipulation. You are populating two indexes on the new table while using
at least one to build your data. I believe it's that random-access memory
work that's chewing up a big chunk of your CPU time.


Does it work better if you delay the index creation of your temporary
table until after the table is populated?


CREATE TABLE fp2 ... SELECT ... ;
ALTER TABLE fp2 ADD KEY p(p),KEY q(q);

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Shawn Green (MySQL)

On 8/18/2010 2:22 PM, Anders Kaseorg wrote:

On Wed, 18 Aug 2010, Shawn Green (MySQL) wrote:
If the server specifies REQUIRES SSL then that client cannot connect 
without going through the full SSL validation process. This means that 
Mallory would need to present the same security credentials that Alice 
has in order to qualify as a secure user (the same certs, same password, 
login from the correct host, etc).


Mallory got the username and hashed password from Alice over the 
unencrypted connection, and we assume that Mallory, like any good MITM, 
has the ability to intercept and forge traffic for arbitrary hosts.  So 
this attack goes through against anyone using passwords over SSL.  This 
already constitutes a vulnerability.


Setting up client certificates does help to prevent this form of attack 
where Mallory tries to issue evil commands to Bob.  It does not, however, 
prevent the attack where Mallory ignores Bob, and uses only the 
unencrypted connection to steal data from Alice or poison her with false 
data.  This also constitutes a vulnerability, which, as far as I can see, 
cannot be prevented in any way with the current MySQL software.


Your redirect has pointed out to me what I missed in Yves's first post. 
In order for the client to require an SSL connection, you have to 
designate a certificate for it to use for the connection.


No, that doesn’t work either!  Against a server with SSL disabled:

$ mysql --ssl --ssl-verify-server-cert \
--ssl-ca=/etc/ssl/certs/ca-certificates.crt \
--ssl-cert=Private/andersk.pem \
--ssl-key=Private/andersk.pem \
-h MY-SERVER
Welcome to the MySQL monitor.  Commands end with ; or \g.
…
mysql> \s
--
mysql  Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using readline 
6.1
…
SSL: Not in use


From the same page but a few lines above the line he quoted
##
This option is not sufficient in itself to cause an SSL connection to be used.
You must also specify the --ssl-ca option, and possibly the --ssl-cert and
--ssl-key options.
##


This documentation appears to be wrong.

Anders


Excellent logic.

I have updated bug #3138 with a private comment to explain your 
presentation of the vulnerability.

http://bugs.mysql.com/bug.php?id=3138

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Anders Kaseorg
On Wed, 18 Aug 2010, Shawn Green (MySQL) wrote:
> If the server specifies REQUIRES SSL then that client cannot connect 
> without going through the full SSL validation process. This means that 
> Mallory would need to present the same security credentials that Alice 
> has in order to qualify as a secure user (the same certs, same password, 
> login from the correct host, etc).

Mallory got the username and hashed password from Alice over the 
unencrypted connection, and we assume that Mallory, like any good MITM, 
has the ability to intercept and forge traffic for arbitrary hosts.  So 
this attack goes through against anyone using passwords over SSL.  This 
already constitutes a vulnerability.

Setting up client certificates does help to prevent this form of attack 
where Mallory tries to issue evil commands to Bob.  It does not, however, 
prevent the attack where Mallory ignores Bob, and uses only the 
unencrypted connection to steal data from Alice or poison her with false 
data.  This also constitutes a vulnerability, which, as far as I can see, 
cannot be prevented in any way with the current MySQL software.

> Your redirect has pointed out to me what I missed in Yves's first post. 
> In order for the client to require an SSL connection, you have to 
> designate a certificate for it to use for the connection.

No, that doesn’t work either!  Against a server with SSL disabled:

$ mysql --ssl --ssl-verify-server-cert \
--ssl-ca=/etc/ssl/certs/ca-certificates.crt \
--ssl-cert=Private/andersk.pem \
--ssl-key=Private/andersk.pem \
-h MY-SERVER
Welcome to the MySQL monitor.  Commands end with ; or \g.
…
mysql> \s
--
mysql  Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using readline 
6.1
…
SSL: Not in use

> From the same page but a few lines above the line he quoted
> ##
> This option is not sufficient in itself to cause an SSL connection to be used.
> You must also specify the --ssl-ca option, and possibly the --ssl-cert and
> --ssl-key options.
> ##

This documentation appears to be wrong.

Anders




Re: help with replication

2010-08-18 Thread a . smith

Quoting Norman Khine :


hello,
I have an issue with replication; here is my procedure:

http://pastie.org/1100368

in the log it shows that replication works, but when I update a record it
is not updated on the server.


What is shown from "show master status" and "show slave status" after  
you have made a change on the master DB?





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Travis Ard
What are you using as your primary key on this table?  Is it an auto_increment
field or something non-sequential?  Do you have your secondary indexes in
place while you load the table, or are you explicitly disabling them and
re-enabling them afterward?

-Travis

-Original Message-
From: Xn Nooby [mailto:xno...@gmail.com] 
Sent: Wednesday, August 18, 2010 9:34 AM
To: mysql@lists.mysql.com
Subject: Slow ALTER TABLE on 70M row InnoDB table

I have been trying to speed up an ALTER TABLE command that adds a
column to a large InnoDB table of about 80M rows.

I have found and tried many different methods, but they are all slow. I
have tried both optimizing the ALTER TABLE command, and dumping and
loading the table (in both SQL and CSV formats). The table size is about
10GB, and the combined index size is about 6GB. I am trying to understand
why it is slow.

I have read that dumping and loading in the CSV format should be the
absolute fastest, and it does only take 20 minutes to dump the 70M rows.
However, it takes the LOAD FILE command 13 hours to import the CSV file.
My understanding of LOAD FILE was that it was already optimized to load
the data, then build the indices afterwards. I don't understand why it
takes so long.

I have read that breaking a SQL dump into "chunks" is also supposed to be
fast, but later chunks insert more slowly than earlier chunks. This is
with keys disabled, and other options disabled.

Ideally I could stick with the ALTER TABLE command, and I have played
around with a lot of the buffer settings. My understanding is that any
enabled key indices need to fit into RAM, and I have played around with a
lot of those settings. Mainly I have increased the buffer size and log
buffer size.

When importing records, I see the "free buffers" slowly run out, and the
import speed drops off when the buffers are used up. The first few million
rows import at up to 30k rows per second, but eventually it slows to a
crawl. I have read a lot about this on the mysqlperformance blog.

There is a lot of information on the web about this topic, but I am not
always sure which parts are for MyISAM and which apply to InnoDB. I have
not experimented with MyISAM, since my tables are InnoDB.

This process is slow on a larger box, which belongs to someone else, and
on my own desktop PC.

Should I stick with trying to make ALTER TABLE work, or should I be
trying to get LOAD FILE to work?

Any suggestions on adding a column to a large table?







help with replication

2010-08-18 Thread Norman Khine
hello,
i have an issue with the replication here is my procedure:

http://pastie.org/1100368

in the log it shows replication works, but when i update a record this
is not updated on the server.

any help much appreciated.

-- 
˙uʍop ǝpısdn p,uɹnʇ pןɹoʍ ǝɥʇ ǝǝs noʎ 'ʇuǝɯɐן sǝɯıʇ ǝɥʇ puɐ 'ʇuǝʇuoɔ
ǝq s,ʇǝן ʇǝʎ
%>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or
chr(97+(ord(c)-83)%26) for c in ",adym,*)&uzq^zqf" ] )




Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread mos

At 10:34 AM 8/18/2010, Xn Nooby wrote:


minutes to dump the 70M rows. However, it takes the LOAD DATA INFILE
command 13 hours to import the CSV file. My understanding of LOAD DATA
INFILE was that it was already optimized to load the data and then build
the indices afterwards. I don't understand why it takes so long.


A common misconception about the LOAD DATA INFILE command is that it rebuilds 
all of the indexes after the load. In fact it defers only the non-unique 
indexes, and only when you are loading data into an empty table (which you 
probably are). The unique and primary indexes are still built while the data 
is being loaded, and I suspect this is why it slows down over time.


Before loading the data, I would recommend removing ALL of the indexes. 
Then after the data has been loaded, issue a single Alter statement to 
rebuild all of the indexes in this one command.  See if that makes a 
difference.


The other alternative is to create another table with the new table 
structure but without the indexes. Then do a


insert into newtable select * from oldtable;

and then create the indexes on the new table with a single Alter statement.
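
A sketch of the copy-then-index approach described above; the table, column, and index names are illustrative, and the explicit column list matters because the new table has one extra column:

```sql
CREATE TABLE newtable LIKE oldtable;            -- copy the structure
ALTER TABLE newtable
  ADD COLUMN new_col INT NULL,                  -- the column being added
  DROP INDEX idx_a, DROP INDEX idx_b;           -- strip secondary indexes
INSERT INTO newtable (col1, col2)               -- explicit list: newtable now
  SELECT col1, col2 FROM oldtable;              -- has one extra column
ALTER TABLE newtable
  ADD INDEX idx_a (col1), ADD INDEX idx_b (col2);  -- one rebuild pass
RENAME TABLE oldtable TO oldtable_bak, newtable TO oldtable;
```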


Mike 






MySQL Server has gone away

2010-08-18 Thread jitendra ranjan
Hi,
 
Whenever I run any command in mysql, it prints the message below and then 
returns the result successfully. What is the reason for this error message?
 
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    264550
Current database: *** NONE ***
 
 
Thanks in advance
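
For what it's worth, the two server variables most often behind this message are the idle timeout and the packet size limit; a quick way to inspect them (the value shown in the SET is an example, not a recommendation):

```sql
SHOW VARIABLES LIKE 'wait_timeout';        -- idle connections are dropped after this many seconds
SHOW VARIABLES LIKE 'max_allowed_packet';  -- oversized packets also kill the connection
-- If the client simply sat idle too long, raising the timeout may help:
SET GLOBAL wait_timeout = 28800;
```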



Slow ALTER TABLE on 70M row InnoDB table

2010-08-18 Thread Xn Nooby
I have been trying to speed up an ALTER TABLE command that adds a
column to a large InnoDB table of about 80M rows.

I have found and tried many different methods, but they are all slow. I
have tried both optimizing the ALTER TABLE command and dumping and
reloading the table (in both SQL and CSV formats). The table size is
about 10GB, and the combined index size is about 6GB. I am trying to
understand why it is slow.

I have read that dumping and loading in the CSV format should be the
absolute fastest, and it does take only 20 minutes to dump the 70M rows.
However, it takes the LOAD DATA INFILE command 13 hours to import the
CSV file. My understanding of LOAD DATA INFILE was that it was already
optimized to load the data and then build the indices afterwards. I
don't understand why it takes so long.

I have read that breaking an SQL dump into "chunks" is also supposed to
be fast, but later chunks insert more slowly than earlier chunks. This
is with keys disabled and other options disabled.

Ideally I could stick with the ALTER TABLE command, and I have played
around with a lot of the buffer settings. My understanding is that any
enabled key indices need to fit into RAM, and I have played around with
a lot of those settings. Mainly I have increased the buffer pool size
and the log buffer size.

When importing records, I see the "free buffers" slowly run out, and
the import speed drops off when the buffers are used up. The first few
million rows import at up to 30k rows per second, but eventually it
slows to a crawl. I have read a lot about this on the MySQL Performance
Blog.

There is a lot of information on the web about this topic, but I am not
always sure which parts are for MyISAM and which apply to InnoDB. I
have not experimented with MyISAM, since my tables are InnoDB.

This process is slow on a larger box, which belongs to someone else,
and on my own desktop PC.

Should I stick with trying to make ALTER TABLE work, or should I be
trying to get LOAD DATA INFILE to work?

Any suggestions on adding a column to a large table?
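
One commonly suggested set of session settings for large InnoDB loads, shown here as a hedged sketch (the file path and table name are placeholders, and the speedup varies by version and schema):

```sql
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
LOAD DATA INFILE '/tmp/bigtable.csv' INTO TABLE big_table
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```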




Re: Moving from one MySQL server to three MySQL servers?

2010-08-18 Thread Todd Lyons
On Thu, Aug 5, 2010 at 8:35 AM, Nunzio Daveri  wrote:
>
> The server in this case is a stand-alone with nothing more than CentOS and
> MySQL 5.1.44 on it.  The drives are 10K rpm SAS drives.  The problem I see is
> that when you stress test the server (typically by running loads of reports -
> selects, joins) the machine hits 98% cpu and leaves only 800mb of free RAM out
> of the 16 GB, of which I told it to allocate 12GB for InnoDB in my.cnf.  Once
> the server sucks up all the memory when we are stress testing it, it holds the
> 12 gb hostage and refuses to release it back into the pool, regardless of whether

We had a similar system where the innodb data dictionary (poor-man's
description is an in-memory map of all the tables it has opened up to
that point) consumed all free memory and the kernel started killing
processes to get some back.  At the time, stock mysql did not have the
ability to limit the size of the data dictionary memory usage (and I
don't see it as an available option for 5.1.x currently), so I tested
XtraDB and it prevented the memory usage from growing without bound.
XtraDB also gave us lots of knobs to twiddle and extra insights into
what innodb was doing.  I've not experimented with current mysql 5.1.x
to see how much of that extra control has made it into the mainline.

> only 800mb of ram free???  If I start to stress it again then it starts to go
> into swap.  Really weird, thus wanting to split the load onto 3 machines.

When you restarted the stress test, did it access new tables or the
same ones that the previous stress test did?  What happens if you tell
innodb to use less memory and do your tests?  Have you found any way
to make mysql not run out of memory?
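
Two quick checks relevant to the memory questions above; interpreting the output is left to the reader, and the section name in the comment is from memory:

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- The "BUFFER POOL AND MEMORY" section of the output below reports
-- free buffers and dictionary memory usage:
SHOW ENGINE INNODB STATUS\G
```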

-- 
Regards...      Todd
I seek the truth...it is only persistence in self-delusion and
ignorance that does harm.  -- Marcus Aurealius




RE: Problem with mytop

2010-08-18 Thread Carlos Eduardo Caldi

Hi Baron

This tool works better than mytop, you solved my problem, thanks a lot


Carlos



> Date: Tue, 17 Aug 2010 10:41:23 -0400
> Subject: Re: Problem with mytop
> From: ba...@xaprb.com
> To: mysql@lists.mysql.com
> 
> Carlos,
> 
> Have you tried innotop instead?  It's a better replacement for mytop.
> (I wrote it.)
> 
> - Baron
> 
> On Tue, Aug 17, 2010 at 8:50 AM, Carlos Eduardo Caldi
>  wrote:
> >
> >
> > If sombody can help me I'll be gratefull
> >
> > I have a problem with mytop1.6 on Mandriva 2010.1, when I install it on a 
> > Slave server mytop works,
> > but when I install it on a Master Server don't works, don't display the 
> > queries.
> >
> > I use mysql version 5.0.77
> > and linux mandriva 2010.1
> >
> > Very Thanks
> >
> > Carlos Caldi -  DBA
> 
> 
> 
> -- 
> Baron Schwartz
> Percona Inc 
> Consulting, Training, Support & Services for MySQL
> 
> 
  

Is this the right export / import command for all databases and users?

2010-08-18 Thread Nunzio Daveri
Hi all, I have upgraded a few test boxes and everything seems to work fine, BUT 
I wanted to verify with the gurus that my syntax is correct, so as to avoid any 
future problems ;-)

The purpose is to dump all databases and users / user privileges from our 
4.1.20 
server and import it into our 5.1.49 server.  On the 4.x server I ran:

# mysqldump -A -uroot -p | gzip -4 > ./4.x_mysqldump.sql.gz

Once done and I have MySQL 5.x running on the same box, I type:

# gzip -dc 4.x_mysqldump.sql.gz | mysql -uroot -p

Question: So with these 2 commands I export all the databases, users, and user 
privileges, and then import them into 5.x?

Question: Will the above export and import change the 5.x mysql root password 
which is different than the 4.x root password?

Question: Will the above export the mysql database itself (the database inside 
MySQL 4.x that controls users, access, etc.) and re-import it into 5.x?  If so, 
is this okay to do, since it's a version upgrade?

Thanks in advance...

Nunzio
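
One caveat worth adding here (an editorial note, not from the original exchange): after loading a 4.1 dump into a 5.1 server, the mysql system tables still have the old 4.1 layout, so the usual advice is to run `mysql_upgrade` from the shell and then reload the grants:

```sql
-- After running `mysql_upgrade -uroot -p` from the shell:
FLUSH PRIVILEGES;
-- Sanity-check that the accounts were imported:
SELECT User, Host FROM mysql.user ORDER BY User, Host;
```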



  

Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Shawn Green (MySQL)

On 8/17/2010 6:13 PM, Yves Goergen wrote:

... snip ...

(Oh look, the "MySQL" guy already has an oracle.com e-mail address...)



And for about two years before that, I had a sun.com email 
address, too. MySQL has not been an independent company for quite a 
while.  Google it if you don't believe me. Sun made many headlines when 
they bought MySQL for one billion (10^9) US dollars.


I still work for MySQL (the combined products) even though there is no 
longer a MySQL, Inc. (the company).

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




RE: OpenOffice, Go-OO, ODBC, Offline Data Entry

2010-08-18 Thread Jerry Schwartz
I deal with a somewhat similar situation. Even though we have fast VPN 
connections among our various offices, each has been afflicted with a 
different database structure (and software) which they cannot change.

What I suggest you do is use the kind of "pseudo-synchronization" that we do. 
Use a local copy of the application and database on each PC (MySQL will do 
fine on even a modest system). Timestamp each record when you create or change 
it.

When the user is back in contact with the office, extract all of the records 
with timestamps newer than the last "synchronization" event and update the 
central database.
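
A minimal sketch of the timestamp-based scheme described above; the table and column names are hypothetical, and MySQL's automatic TIMESTAMP maintenance does the per-record stamping:

```sql
CREATE TABLE client_record (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARCHAR(255),
  updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
             ON UPDATE CURRENT_TIMESTAMP
);

-- Back at the office, push everything changed since the last sync event:
SELECT * FROM client_record
WHERE updated_at > '2010-08-17 09:00:00';   -- last recorded sync time
```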

Is this foolproof? Absolutely not, if there are conflicts between the changes 
by different users. You'll be stuck with "he who writes last, writes best"; but 
I think that's as good as it's going to get for you.

How well this works depends upon the type of work. If the users have 
non-overlapping "customers", or whatever, then it won't be too bad. You'll 
have to judge for yourself.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Lord_Devi [mailto:lordd...@gmail.com]
>Sent: Wednesday, August 18, 2010 1:43 AM
>To: mysql@lists.mysql.com
>Subject: OpenOffice, Go-OO, ODBC, Offline Data Entry
>
>Hello,
>
> I am new to the mailing list here, so I'd like to introduce myself
>quickly. My name is Casey Quibell, and I am also rather new to the
>concept of programming a database (Or in this case, more specifically
>using an ODBC connector to tie OpenOffice into it.)
>
> Just a bit of technical preamble. While I am currently implementing
>this solution using Go-OO and MySQL, I am not tied to them. I like
>PostgreSQL as well, and there is little wrong with OpenOffice. Hell, if
>there are recommendations to be made even beyond these software
>selections I am listening.
>
> I have a project I have been commissioned to complete, and while the
>majority of it seems very simple and straight-forward to me, there
>remains an issue that has me concerned. Simply put, this agency which
>needs a database, has a large number of paper forms they have to fill
>out on a regular basis. Many of these forms ask the same questions over
>and over again. Using an ODBC connector and OpenOffice, I am able to
>create .pdf files that resemble very closely the forms they are already
>used to. By entering information into these forms, they are able to have
>the information parse directly into a SQL database which they can then
>create reports from later (A very important ability for them).
>
> In addition they are able to bring up an empty form to fill out, and if
>the relevant individual being processed is already in the system, it
>will fill out most of the 2nd form's fields for them; leaving only the
>remaining fields which are new and specific to the 2nd form. All of this
>is a massive time saver for them.
>
> Here is the catch. The SQL database itself exists at a 'home office',
>and these workers are wanting to be able to enter this data remotely; in
>an 'offline mode' as it were. Because they are required to fill out
>these forms away from the office, I have a logistical problem. If they
>could have WIFI or Internet access while away from the office, I would
>simply configure a VPN solution for them. However this is not something
>I can do; where they must go frequently has no Internet access at all,
>and a cell phone tether with a dataplan would just be inordinately
>expensive for this particular group.
>
> What options do I have available to me? I realize that while these
>workers are on-site, there is likely very little that can be done as far
>as the forms 'pulling' data from the office SQL database to auto-fill
>fields, but what about the other way around? I.e. Them going off-site,
>filling out the necessary forms, and then once they can get back to the
>office, upload the data to the database then?
>
> I don't know if maybe having a local SQL database running on each
>worker laptop which could somehow 'sync' the data would be a viable
>option or not... That is something I have never tried before. One option
>as far as that route goes that I have found is an application called
>Pervasync: http://www.pervasync.com/ which claims to be able to sync
>database material in this manner. However, it is commercial, and I am
>quite devoted to using only Open Source software.
>
> So far, it almost seems like I might have to tell the clients that it
>is simply not even possible what they are asking for. That they may have
>to enter the data twice: Once on-site, and a 2nd time when they get back
>to the office. Essentially copying the offline data, into the 'live'
>forms essentially.
>
> Thank you for your time and consideration. I apologize if my post was
>a little long, but I really wanted to try and be as clear as I could be
>as to my over-all intent, as 

Re: How to use SSL? (SSL is enabled but not used)

2010-08-18 Thread Shawn Green (MySQL)

On 8/17/2010 6:02 PM, Anders Kaseorg wrote:

On Wed, 2010-08-11 at 14:23 -0400, Shawn Green (MySQL) wrote:

On 8/9/2010 5:27 PM, Yves Goergen wrote:

What's that supposed to mean? If there's no way to force the connection
into SSL, it is entirely useless. Anyone on the wire could simply
pretend that the server doesn't support SSL and so deny the encryption
and the client wouldn't even care... 

If you don't want to require SSL on the local connections then don't
set the flag on the @localhost account.

If you want the SSL required on the other connections, then set it on
the @'...' version of the account that the remote users login through.


Excuse me, but isn’t Yves exactly right here?

None of the client-side options (I tried --ssl, --ssl-ca=…,
--ssl-verify-server-cert, --ssl-key=…, --ssl-cipher=…) can currently be
used to force an SSL connection to be used.  And requiring SSL from the
server side does nothing to stop man-in-the-middle attacks.

(Suppose Bob the SQL server grants some privileges to Alice the user
with SSL required.  Now Alice can log in with her password over SSL and
gets denied over non-SSL.  Great.

But now Mallory comes along and intercepts a connection from Alice
intended for Bob.  Even if Bob would have claimed that he requires SSL,
nothing stops Mallory from claiming that she doesn’t require SSL.
Because Alice cannot force the use of SSL from the client side, Alice
will make a successful unencrypted connection to Mallory.  Then Mallory
can accept the connection, ignoring Alice’s authentication, and steal
Alice’s data; or Mallory can make a separate SSL connection to Bob,
forward Alice’s authentication over it, then take over and issue evil
commands to Bob.)

This same issue was reported back in 2004 and ignored:
http://bugs.mysql.com/bug.php?id=3138

I think this is a serious security problem that demands more attention
than dismissal as documented behavior.  To solve it, there needs to be a
way to force the use of SSL from the client side.

Anders


If the account specifies REQUIRE SSL, then that client cannot connect 
without going through the full SSL validation process. This means that 
Mallory would need to present the same security credentials that Alice 
has in order to qualify as a secure user (the same certs, same password, 
login from the correct host, etc.).
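
A sketch of the two halves involved; the account name and file paths are illustrative. Note that REQUIRE SSL only forces encryption, while REQUIRE X509 additionally forces the client to present a certificate signed by a CA the server trusts:

```sql
-- Server side:
GRANT ALL ON mydb.* TO 'alice'@'%' IDENTIFIED BY 'secret' REQUIRE X509;
FLUSH PRIVILEGES;

-- Client side, in the [mysql] section of the option file (paths are examples):
--   ssl-ca   = /etc/mysql/ca-cert.pem
--   ssl-cert = /etc/mysql/client-cert.pem
--   ssl-key  = /etc/mysql/client-key.pem
```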


Your redirect has pointed out to me what I missed in Yves's first post. 
In order for the client to require an SSL connection, you have to 
designate a certificate for it to use for the connection.


From the same page but a few lines above the line he quoted
##
This option is not sufficient in itself to cause an SSL connection to be 
used. You must also specify the --ssl-ca option, and possibly the 
--ssl-cert and --ssl-key options.

##

So you can see that the --ssl option by itself is not specific enough to 
designate which certificate the client should use. You can specify these 
options either on the command line or in the configuration file used by 
the client (put them in the [mysql] section).


In your simulated MITM attack, if Alice was using the specific --ssl-* 
options on her connection attempt, then Mallory would need to hold the 
server-side equivalents in order to pose as the server. Since the 
likelihood of this is small (except in the case of a physical hack of 
the server's cert files which no amount of digital manipulation can 
really avoid) then Mallory would be unable to pose as Bob and the 
intercept would fail.


Does this help close the security hole you perceived in our SSL 
implementation?


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN
