Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Joerg Bruehe
Hi everybody!


Shawn Green (MySQL) wrote:
 On 1/21/2011 14:21, Kendall Gifford wrote:
 Hello everyone, I've got a database on an old Fedora Core 4 server
 running
 MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
 has
 just two (InnoDB) tables:

 messages (approx 2.5 million records)
 recipients (approx 6.5 million records)

 [[ ... see the original post for the schema details ... ]]


 I have the following query that is just too slow:

 SELECT messages.* FROM messages
 INNER JOIN recipients ON recipients.message_id = messages.id
 WHERE recipients.employee_id = X
 GROUP BY messages.id
 ORDER BY sent_at DESC
 LIMIT 0, 25;

 This takes about 44 seconds on average. [[...]]

 
 You need to get rid of the GROUP BY to make this go faster. You can do
 that by running two queries: one to pick the list of unique
 recipients.message_id values that match your WHERE condition, then
 another to actually retrieve the message data. [[...]]

I don't want to contradict Shawn, but currently I fail to see the need
for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

can return each messages row only once, unless there are multiple
recipients records for the same combination of message_id and
employee_id.

I don't know whether that can happen in the poster's application, or
whether it would cause trouble if a result row occurred multiple times.
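For illustration, Shawn's two-query suggestion might look something like this (an untested sketch against the schema named in the original post):

```sql
-- Step 1: collect the distinct message ids for the employee in question.
SELECT DISTINCT message_id
FROM recipients
WHERE employee_id = X;

-- Step 2: fetch the messages for the ids returned by step 1,
-- so the GROUP BY on the join is no longer needed.
SELECT messages.*
FROM messages
WHERE id IN (...)            -- the id list from step 1
ORDER BY sent_at DESC
LIMIT 0, 25;
```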


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Is is possible to update a column based on a REGEXP on another column?

2011-01-24 Thread Johan De Meersman
If the bracketed stuff really can be anything, you're better off doing it
externally, I guess. If you can be reasonably sure that there'll not be any
square brackets in there, you can fluff about with instr() and substr().
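For instance, something along these lines (a sketch only; `users`, `username` and `tag` are made-up names, and it assumes exactly one bracketed group per value):

```sql
-- Copy the text between the first '[' and the first ']' into the new column.
UPDATE users
SET tag = SUBSTRING(
        username,
        INSTR(username, '[') + 1,
        INSTR(username, ']') - INSTR(username, '[') - 1)
WHERE username LIKE '%[%]%';
```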


On Sat, Jan 22, 2011 at 6:18 PM, Eric Bergen eric.ber...@gmail.com wrote:

 There isn't a built in way but you can try
 http://www.mysqludf.org/lib_mysqludf_preg/

 I would go with the php/perl script if this is a one time thing.

 -Eric

 On Jan 21, 2011, at 11:31 AM, Phil pchap...@nc.rr.com wrote:

  I have a table which contains a username column which may be constructed
  something like
 
  somename[A] or [DDD]someothername
 
  The A or DDD can be anything at all.
 
  I've added a new column to the table, which I'd like to populate with
  the value inside the square brackets.
 
  I could write something in perl or php to run through each and update
 them
  but was wondering if there is a way to do this within mysql itself?  The
  regexp only returns a boolean so I can't see how to use that.
 
  Regards
 
  Phil
 
 
  --
  Distributed Computing stats
  http://stats.free-dc.org





-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon a donkey


Mysql search query ignoring dots

2011-01-24 Thread Barbara Picci

Hi all,

I have to perform a MySQL query on a table with millions of records.
I've full-text indexed my search field and I'm searching with MATCH ... AGAINST,
but there is a problem: the field holds company names that contain dots.
For instance it contains PO.SE. srl, and I want to find it whether the
user searches for POSE, PO.SE, P.O.S.E., etc.
I searched the web but couldn't find a solution. I'd rather not add a
second field holding a cleaned-up copy of the string: I would like to
solve this in the query itself and keep the table from growing too big.
But if I find no other solution, I will fall back on that workaround.
I found a similar post, but its solution doesn't seem to fit my
situation.

You can see it at the url:
http://forums.mysql.com/read.php?10,395557,395584#msg-395584
In my case replace(email, '.', '') = replace(theSearchValue, '.', '')
makes no difference and doesn't change my results.

My query, searching for POSE, is:

select aziende.* from aziende where 10 AND
(MATCH(aziende.ragione_sociale) AGAINST('+POSE' IN BOOLEAN MODE) OR
(replace(aziende.ragione_sociale, '.', '') = replace('POSE', '.', '')))
order by aziende.ragione_sociale limit 0, 10


The alternative would be REGEXP, but I've read that it makes queries
slow on a table with millions of records, and I don't know how to
ignore the dots in the regular expression anyway.
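For what it's worth, a dot-insensitive comparison can be written entirely in SQL, though it cannot use the full-text index and will scan the table (a sketch against the `aziende` table from the post):

```sql
-- Strip the dots from both sides before comparing;
-- 'PO.SE' stands in for the user's search term.
SELECT aziende.*
FROM aziende
WHERE REPLACE(aziende.ragione_sociale, '.', '') LIKE
      CONCAT('%', REPLACE('PO.SE', '.', ''), '%')
ORDER BY aziende.ragione_sociale
LIMIT 0, 10;
```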


Can anyone help me?

Thanks in advance.
Barbara

--

Barbara Picci
Micro srl
viale Marconi 222, 09131 Cagliari  - tel. (+39) 070400240
http://www.microsrl.com




RE: CURRENT insert ID

2011-01-24 Thread Jerry Schwartz
-Original Message-
From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
Sent: Sunday, January 23, 2011 9:36 AM
To: Jerry Schwartz
Cc: 'mysql.'
Subject: Re: CURRENT insert ID

Seeing from later posts that you're using InnoDB, why don't you simply
wrap the INSERT/UPDATE into a transaction? You don't avoid the UPDATE,
but I'm not sure I understand the need to mess w/ triggers.

BEGIN;
INSERT INTO t (id) VALUES (NULL);
UPDATE t SET xxx = LAST_INSERT_ID() WHERE id = LAST_INSERT_ID();
COMMIT;

[JS] I'll have to investigate how to do a transaction from Access. I guess 
pass-through queries might do it, but I'm not sure.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



Best,

/ Carsten

On 21-01-2011 17:41, Jerry Schwartz wrote:
 Here it is in a nutshell:



 I have a field that needs to be set equal to the auto-increment ID as a
record is entered. I don't know how to do this without a subsequent UPDATE
(which I can do with a trigger). Is there any way to avoid the cost of an
UPDATE?



 Here's a more concrete description of the problem:



 CREATE TABLE t (
   id INT(11) AUTO_INCREMENT PRIMARY KEY,
   xxx INT(11)
 );



 When a record is added to table `t`, I need to set `xxx` to the value
generated for `id`. (`xxx` might be changed later.)



 Is there anything clever I can do?



 Regards,



 Jerry Schwartz

 Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341

 E-mail: je...@gii.co.jp

 Web site: http://www.the-infoshop.com/











RE: CURRENT insert ID

2011-01-24 Thread Jerry Schwartz
-Original Message-
From: Donovan Brooke [mailto:li...@euca.us]
Sent: Friday, January 21, 2011 7:28 PM
Cc: mysql@lists.mysql.com
Subject: Re: CURRENT insert ID

Just an idea..

Don't auto_increment the main table.. create a unique Id table,
auto_increment that, and grab that value first for use with both fields
in your main table.

[JS] I've thought of that, but it creates another problem.

Let's say I add a record to the ID table, thereby auto-incrementing its key. 
Now I need to retrieve that key value. How do I do that while retaining some 
semblance of data integrity? I'd have to do something like SELECT MAX(id), 
which fails to retrieve my value if someone else has inserted a record in 
the meantime.

I don't, from Access, have the ability to throw a lock on the table (so far as 
I know). I guess maybe I could do that with pass-through queries, but I'm not 
sure.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




Donovan


--
D Brooke









Re: CURRENT insert ID

2011-01-24 Thread Mark Goodge

On 24/01/2011 15:42, Jerry Schwartz wrote:

-Original Message-
From: Donovan Brooke [mailto:li...@euca.us]
Sent: Friday, January 21, 2011 7:28 PM
Cc: mysql@lists.mysql.com
Subject: Re: CURRENT insert ID

Just an idea..

Don't auto_increment the main table.. create a unique Id table,
auto_increment that, and grab that value first for use with both fields
in your main table.


[JS] I've thought of that, but it creates another problem.

Let's say I add a record to the ID table, thereby auto-incrementing its key.
Now I need to retrieve that key value. How do I do that while retaining some
semblance of data integrity? I'd have to do something like SELECT MAX(id),
which fails to retrieve my value if someone else has inserted a record in
the meantime.


That's what LAST_INSERT_ID() is for:

http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id

This is on a per-connection basis, so even if another connection inserts 
a line in the meantime your query will return the auto-increment value 
of the line you inserted.


Most programming languages with an interface to MySQL, either built-in 
or via a module, implement this natively. For example, in PHP:


mysql_query("insert into mytable set name = 'foo'");
$id = mysql_insert_id();

the value of $id will be the auto-increment number from the line you 
just inserted.


Mark
--
http://mark.goodge.co.uk
http://www.ratemysupermarket.com




Re: CURRENT insert ID

2011-01-24 Thread Jaime Crespo Rincón
2011/1/21 Jerry Schwartz je...@gii.co.jp:
-Original Message-
From: João Cândido de Souza Neto [mailto:j...@consultorweb.cnt.br]
Sent: Friday, January 21, 2011 12:47 PM
To: mysql@lists.mysql.com
Subject: Re: CURRENT insert ID

Ok, you must have your own reasons to do that.

The fact is: you can't copy the auto_increment value into another field
of the same table and record, even in a trigger.

So, the best way is a second update.

 [JS] That's what I'm trying to do now, but I'm using MS Access through an ODBC
 connection and I haven't figured out how to retrieve last_insert_id.

I will tell you a secret. But shh. Do not tell anyone:

--8<--8<--8<--8<--8<--8<--8<--8<--8<--8<--8<--
mysql> create table mytable(id int auto_increment primary key, name
varchar(255));
Query OK, 0 rows affected (0.07 sec)

mysql> insert into mytable (name) values ('test data');
Query OK, 1 row affected (0.00 sec)

mysql> select id from mytable where id is null;  -- OMG!!!
+----+
| id |
+----+
|  1 |
+----+
1 row in set (0.00 sec)
--8<--8<--8<--8<--8<--8<--8<--8<--8<--8<--8<--
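(Worth noting: the `id IS NULL` behaviour above depends on the session variable `SQL_AUTO_IS_NULL` being enabled, which was the default on old servers. `LAST_INSERT_ID()` gives the same per-connection answer without that dependency:)

```sql
SELECT LAST_INSERT_ID();   -- the id generated by this connection's last INSERT
SET SQL_AUTO_IS_NULL = 0;  -- turns the "WHERE id IS NULL" trick off
```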


-- 
Jaime Crespo
MySQL & Java Instructor
Software Developer
Warp Networks
http://warp.es




Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe joerg.bru...@oracle.com wrote:

 Hi everybody!


 Shawn Green (MySQL) wrote:
  On 1/21/2011 14:21, Kendall Gifford wrote:
  Hello everyone, I've got a database on an old Fedora Core 4 server
  running
  MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
  has
  just two (InnoDB) tables:
 
  messages (approx 2.5 million records)
  recipients (approx 6.5 million records)
 
  [[ ... see the original post for the schema details ... ]]
 
 
  I have the following query that is just too slow:
 
  SELECT messages.* FROM messages
  INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X
  GROUP BY messages.id
  ORDER BY sent_at DESC
  LIMIT 0, 25;
 
  This takes about 44 seconds on average. [[...]]
 
 
  You need to get rid of the GROUP BY to make this go faster. You can do
  that by running two queries, one to pick the list of unique
  recipients.message_id values that match your where condition then
  another to actually retrieve the message data. [[...]]

 I don't want to contradict Shawn, but currently I fail to see the need
 for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

 can return each messages row only once, unless there are multiple
 recipients records for the same combination of message_id and
 employee_id.

 I don't know whether that can happen in the poster's application, or
 whether it would cause trouble if a result row occurred multiple times.


In my application, there CAN in fact be several recipients records with
both the same message_id foreign key value AND the same employee_id
value (some employees may be recipients of a message several times over via
alternative addresses and/or aliases). However, as I rework things, I could
probably rework the application logic to nuke the GROUP BY and just cope,
in code, with the extra messages records in my result set. (Just FYI, the
SQL query is simply the default query generated by Rails or, more
specifically, ActiveRecord 2.3.9, which I can and will be optimizing.)

I will additionally be moving this database to a new server. However, for
academic interest, I'll see if I can make time to post the query time(s)
once I change the app, before moving the database to a new (and better
configured) server.

Thanks for the help everybody.



 Regards,
 Jörg


-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford zettab...@gmail.com wrote:



 [[ ... earlier messages in this thread quoted verbatim; elided, see above ... ]]

 In my application, there CAN in fact be several recipients records with
 both the same message_id foreign key value AND the same employee_id
 value (some employees may be recipients of a message several times over via
 alternative addresses and/or aliases). However, as I rework things, I could
 probably rework the application logic to nuke the GROUP BY and just cope,
 in code, with the extra messages records in my result set. (Just FYI, the
 SQL query is simply the default query generated by Rails or, more
 specifically, ActiveRecord 2.3.9, which I can and will be optimizing.)

 I will additionally be moving this database to a new server. However, for
 academic interest, I'll see if I can make time to post the query time(s)
 once I change the app, before moving the database to a new (and better
 configured) server.


Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

-- 
Kendall Gifford
zettab...@gmail.com


RE: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Gavin Towey
If you show the EXPLAIN SELECT .. output, and the table structure, someone will 
be able to give a more definite answer.

-Original Message-
From: Kendall Gifford [mailto:zettab...@gmail.com]
Sent: Monday, January 24, 2011 2:29 PM
To: mysql@lists.mysql.com
Subject: Re: Slow query on MySQL4 server doing simple inner join of two InnoDB 
tables

On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford zettab...@gmail.com wrote:



 [[ ... earlier messages in this thread quoted verbatim; elided, see above ... ]]

Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

--
Kendall Gifford
zettab...@gmail.com

IMPORTANT: This email message is intended only for the use of the individual to 
whom, or entity to which, it is addressed and may contain information that is 
privileged, confidential and exempt from disclosure under applicable law. If 
you are NOT the intended recipient, you are hereby notified that any use, 
dissemination, distribution or copying of this communication is strictly 
prohibited.  If you have received this communication in error, please reply to 
the sender immediately and permanently delete this email. Thank you.




InnoDB and rsync

2011-01-24 Thread Robinson, Eric
Is there a way to safely backup an InnoDB database using rsync? 

Right now we have a very efficient and reliable way to back up 240+
separate instances of MySQL with MyISAM tables. The databases range in
size from 0.5 GB to 16 GB. During the backup, users can still access the
system, so our customers can work 24x7. In the process, we also refresh
240+ slave instances with a perfect byte-for-byte replica of the master
databases.

The whole thing takes about 30 minutes. 

Here's how we do it. 

Late at night when the number of users on the system is low, we do the
following for each of the 240+ instances of MySQL...

1. Shut down the slave and remove all replication-related log files.

2. Perform an rsync of the master's data directory to the slave. Users
may be making changes to tables during this rsync.

3. Issue a FLUSH TABLES WITH READ LOCK on the master followed by a RESET
MASTER.

4. Perform a second rsync of the data directory from the master to the
slave to copy any user changes that happened during step 2. This usually
completes in a few seconds, often less than 1. If any users were trying
to insert records at this exact moment, their application may appear to
pause very briefly.

5. Start the slave.
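For reference, the master-side SQL behind steps 3 and 4 boils down to the following (the UNLOCK TABLES at the end is an assumption; the original steps leave the unlock implicit):

```sql
FLUSH TABLES WITH READ LOCK;  -- block writes while the second rsync runs
RESET MASTER;                 -- discard the old binary logs
-- ... second rsync of the data directory happens here ...
UNLOCK TABLES;                -- let writers proceed again
```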

When I'm all done, I have 240+ slave servers in perfect sync with their
masters, each having a 100% identical binary copy of its master's
database. Since these copies are truly identical, they can be used for a
second layer of backup to other media.  

Like I said, the whole process takes about 30 minutes because the rsync
algorithm only copies the byte-level changes.

IS THERE ANY WAY TO SET UP SOMETHING THIS EASY AND EFFICIENT USING
INNODB?

I've been reading about InnoDB hot copy and other approaches, but none
of them seem to work as well as the approach I have worked out with
MyISAM. Unfortunately, my software wants to force us to switch to
InnoDB, so I'm really stuck. If we have to switch to InnoDB and we
cannot come up with a method for doing fast, rsync-style backups, it
will probably mean huge, costly, and unnecessary changes to our
infrastructure.

Any help will be GREATLY appreciated.

--
Eric Robinson


Disclaimer - January 24, 2011 
This email and any files transmitted with it are confidential and intended 
solely for mysql@lists.mysql.com. If you are not the named addressee you should 
not disseminate, distribute, copy or alter this email. Any views or opinions 
presented in this email are solely those of the author and might not represent 
those of Physicians' Managed Care or Physician Select Management. Warning: 
Although Physicians' Managed Care or Physician Select Management has taken 
reasonable precautions to ensure no viruses are present in this email, the 
company cannot accept responsibility for any loss or damage arising from the 
use of this email or attachments. 
This disclaimer was added by Policy Patrol: http://www.policypatrol.com/




Re: InnoDB and rsync

2011-01-24 Thread Johan De Meersman
I suspect the same trick might work with InnoDB (with pretty much the same
caveats), but you'd be best off setting innodb-file-per-table - I'm sure
you've already seen that the large datafiles are a hindrance to smooth
rsyncing :-)

Make sure to test extensively, though.
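For reference, that option is set in my.cnf (shown as a fragment; it only affects tables created after the change, so existing tables must be rebuilt to move them out of the shared tablespace):

```ini
[mysqld]
innodb_file_per_table = 1
```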


On Tue, Jan 25, 2011 at 5:37 AM, Robinson, Eric eric.robin...@psmnv.com wrote:

 Is there a way to safely backup an InnoDB database using rsync?

 [[ ... rest of the original post quoted verbatim; elided, see above ... ]]




-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon a donkey