Hi,

You should check out pt-archiver.

http://www.percona.com/doc/percona-toolkit/pt-archiver.html

"The goal is a low-impact, forward-only job to nibble old data out of the table 
without impacting OLTP queries much."

This tool can copy data from one table and insert it into another table in 
chunks. The destination doesn't need to be in the same database or even on the 
same server. Take into account that by default this tool deletes the data from 
the source table, so use the --no-delete option.
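
A minimal sketch of the kind of invocation you would use (the host and 
database in the DSNs are placeholders; t1, t2, and the 100,000 chunk size 
come from your question):

  pt-archiver \
    --source h=localhost,D=mydb,t=t1 \
    --dest   h=localhost,D=mydb,t=t2 \
    --where  "1=1" \
    --limit  100000 --commit-each \
    --no-delete --progress 100000 --statistics

--where "1=1" selects every row, --limit controls how many rows are handled 
per chunk, and --commit-each commits after each chunk, so the copy never 
holds one long transaction open the way a single INSERT ... SELECT does.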

Try it in a testing environment before executing it in production :)

Regards,
-- 
Miguel Ángel Nieto,
Support Engineer, Percona Inc.
http://www.percona.com | http://www.mysqlperformanceblog.com
Email : miguel.ni...@percona.com
Skype : percona.migueln

24/7 Emergency : +1 888 401 3401 ext 911
Training : http://www.percona.com/training/
Support : http://www.percona.com/mysql-support/

Percona Live MySQL Conference April 10-12 Santa Clara
http://www.percona.com/live/mysql-conference-2012/

On 01/12/2011, at 23:16, Angela liu wrote:

> Hi, folks:
> 
> 
> I have a situation:
> 
> I have a large InnoDB table t1 with 45 million rows and need a new table t2 
> exactly the same as t1. To copy the data from t1 to t2, I have the following 
> queries:
> 
> create table t2 like t1;
> 
> insert into t2 select * from t1;
> 
> 
> but the above insert may run a long time, which can interfere with 
> performance. Is there a way to chunk the insert into the new table, like 
> breaking it down into chunks of 100,000 rows?
> 
> 
> Thanks
