Re: best way to copy a innodb table

2013-07-02 Thread Arjun
Well, the easy way to chunk the inserts is by using LIMIT. Here is what I 
used for one of my projects:

Insert ignore into t1 (f1, f2, f3)
Select f1, f2, f3 from t2 limit 1000000, 1000000

This inserts 1M records at a time, starting from the 1M-th record in t2, and 
you can keep incrementing the offset as you progress. This helps you monitor 
the inserts while moving chunks of records out of the source table.

Enjoy! 
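A minimal sketch of that offset-based loop (Python; the chunk size and the t1/t2/f1..f3 names are the placeholders from this post, and executing each statement over a real MySQL connection is left out):

```python
# Generate the chunked INSERT IGNORE ... LIMIT statements from the example.
# Each statement would be executed over a MySQL connection between chunks,
# which is what makes the copy easy to monitor.

def chunked_copy_statements(total_rows, chunk=1_000_000):
    """Yield one INSERT ... SELECT ... LIMIT offset, chunk statement per chunk."""
    for offset in range(0, total_rows, chunk):
        yield (
            "INSERT IGNORE INTO t1 (f1, f2, f3) "
            f"SELECT f1, f2, f3 FROM t2 LIMIT {offset}, {chunk}"
        )

stmts = list(chunked_copy_statements(45_000_000))  # 45M rows, as in the thread
print(len(stmts))    # 45 chunks
print(stmts[1])      # second chunk: LIMIT 1000000, 1000000
```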


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: best way to copy a innodb table

2013-07-02 Thread Michael Dykman
Another technique to avoid impact on the source database is to create your
target as MyISAM, pump your records into that (no ACID overhead), and at the
end:

ALTER TABLE mytable ENGINE=InnoDB;

The ALTER can take a while, but it imposes no strain on the source server
at all.
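The full sequence is worth spelling out, since CREATE TABLE ... LIKE copies the source's InnoDB engine and the staging table has to be switched to MyISAM before loading. A sketch (the table name mytable_copy is illustrative, and the bulk INSERT could itself be chunked as in the earlier post):

```python
# Ordered statements for the MyISAM-staging approach: load without ACID
# overhead, then convert the finished table to InnoDB in one ALTER.
steps = [
    "CREATE TABLE mytable_copy LIKE mytable",        # inherits InnoDB...
    "ALTER TABLE mytable_copy ENGINE=MyISAM",        # ...so switch to MyISAM first
    "INSERT INTO mytable_copy SELECT * FROM mytable",
    "ALTER TABLE mytable_copy ENGINE=InnoDB",        # may take a while
]
for stmt in steps:
    print(stmt)
```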






-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: best way to copy a innodb table

2013-07-02 Thread Perrin Harkins
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com
 wrote:

 You should check pt-archiver.


+1.  It works very well for this type of job.

- Perrin


RE: best way to copy a innodb table

2013-07-02 Thread Rick James
The particular example given here is unsafe and slow.

* Without an ORDER BY, you are not guaranteed that the chunks will be distinct.

* If there are any INSERTs/DELETEs between chunk copies, you will get 
dups/missing rows for two reasons:  the inserted/deleted rows, and the OFFSET 
is not quite right.

* OFFSET requires walking over the skipped rows.  As you get farther into the 
table, this takes longer.  That is, you have an ORDER(N**2) operation, not what 
could be ORDER(N).

* If replication is involved, 1M rows is a lot -- there will be noticeable 
delays where other replication activity is stalled.

If you have an AUTO_INCREMENT PRIMARY KEY, then using WHERE id > 1000000 AND id 
<= 2000000 is a better approach -- Order(N), and chunks guaranteed to be 
distinct.  Still, it is not immune from INSERTs/DELETEs.  Replication is fixed 
by decreasing the chunk size (and by avoiding OFFSET).
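A sketch of that keyed-range loop (Python; t1/t2/f1..f3 and the id column are the thread's placeholders, the chunk size is illustrative, and running each statement over a MySQL connection is left out):

```python
# Keyed-range chunking over an AUTO_INCREMENT PRIMARY KEY: each chunk is a
# half-open id range, so chunks are distinct by construction and no OFFSET
# scan is needed -- O(N) total instead of O(N**2).

def keyed_copy_statements(max_id, chunk=1_000_000):
    for lo in range(0, max_id, chunk):
        yield (
            "INSERT IGNORE INTO t1 (f1, f2, f3) "
            "SELECT f1, f2, f3 FROM t2 "
            f"WHERE id > {lo} AND id <= {lo + chunk}"
        )

stmts = list(keyed_copy_statements(45_000_000))
print(stmts[0])   # first chunk: WHERE id > 0 AND id <= 1000000
```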




Re: best way to copy a innodb table

2011-12-01 Thread Claudio Nanni
Sure you can, and you should.

But if you also update/delete rows in the first table, you have to
set up a trigger to log the changes.
If you are lucky (inserts only), then it's easier.

Cheers

Claudio

2011/12/1 Angela liu yyll2...@yahoo.com

 Hi, folks:


 I have a situation:

 A large InnoDB table t1 with 45 million rows needs a new table t2
 exactly the same as t1. To copy the data from t1 to t2, I have the
 following query:

 create table t2 like t1;

 insert into t2 select * from t1;


 But the above insert may run a long time, which can interfere with
 performance. Is there a way to chunk the insert into the new table, like
 breaking it down into chunks of 100,000 rows?


 Thanks




-- 
Claudio


Re: best way to copy a innodb table

2011-12-01 Thread Angela liu
How do I break the table into chunks of 100,000 rows?

thanks




Re: best way to copy a innodb table

2011-12-01 Thread Miguel Angel Nieto
Hi,

You should check pt-archiver.

http://www.percona.com/doc/percona-toolkit/pt-archiver.html

The goal is a low-impact, forward-only job to nibble old data out of the table 
without impacting OLTP queries much.

This tool can copy data and insert it into another table in chunks. The 
destination doesn't need to be in the same database or on the same server. Take 
into account that this tool deletes data from the source table by default, so 
use the --no-delete option.

Try it on a testing environment before executing it in production :)
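For reference, an invocation along these lines, assembled here as an argv list (the host/database/table names are placeholders; --source, --dest, --where, --limit, --commit-each and --no-delete are standard pt-archiver options, but check the documentation for your installed version):

```python
# Sketch: a pt-archiver command that copies t1 into t2 in chunks without
# deleting from the source.  Run it with subprocess.run(cmd) once the DSNs
# point at real servers.
cmd = [
    "pt-archiver",
    "--source", "h=localhost,D=mydb,t=t1",   # placeholder source DSN
    "--dest",   "h=localhost,D=mydb,t=t2",   # placeholder destination DSN
    "--where",  "1=1",        # --where is required; 1=1 selects every row
    "--limit",  "100000",     # rows fetched per chunk
    "--commit-each",          # commit after each chunk
    "--no-delete",            # copy only; leave the source rows in place
]
print(" ".join(cmd))
```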

Regards,
-- 
Miguel Ángel Nieto,
Support Engineer, Percona Inc.
http://www.percona.com | http://www.mysqlperformanceblog.com
Email : miguel.ni...@percona.com
Skype : percona.migueln

24/7 Emergency : +1 888 401 3401 ext 911
Training : http://www.percona.com/training/
Support : http://www.percona.com/mysql-support/

Percona Live MySQL Conference April 10-12 Santa Clara
http://www.percona.com/live/mysql-conference-2012/






Re: best way to copy a innodb table

2011-12-01 Thread Karen Abgarian
Hi, 

I have a support case open with MySQL on this subject.  Here is what we were 
able to come up with.

1.  Create the table with the primary key and unique key constraints defined, 
but no secondary indexes.
2.  Bump up the InnoDB logs to 2M and, especially, give InnoDB as much memory 
as possible.
3.  Disable binary logging and InnoDB sync, if running with those.
4.  Split the job into several insert..select statements, using some criteria 
dependent on the table.
5.  Run the load in parallel.
6.  Create the secondary indexes via the fast create option.

My experience is that the inserts run at blazing speed until the table becomes 
roughly as big as the InnoDB buffer pool.
The inserts then slow down gradually, to something like 100 inserts/second.  The net 
result is that the first 25M records can be loaded in, say, an hour, while the 
remaining 25M records take something like 24 hours.

If anyone can shed some light on this, that would be great.  I observe no 
bottlenecks anywhere on the OS: CPU is low, no paging,
no significant disk activity.
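Steps 4 and 5 of that recipe can be sketched like this (the range split and the 4-way parallelism are illustrative; the per-range copy is simulated with an in-memory dict so the partitioning is checkable, where a real worker would run its insert..select over its own MySQL connection):

```python
# Split the key space into contiguous (lo, hi] ranges and "copy" each range
# in a worker thread.  Each range maps to one statement of the form:
#   INSERT INTO t2 SELECT * FROM t1 WHERE id > lo AND id <= hi
from concurrent.futures import ThreadPoolExecutor

def key_ranges(max_id, parts):
    """Split (0, max_id] into `parts` contiguous (lo, hi] ranges."""
    step = -(-max_id // parts)          # ceiling division
    return [(lo, min(lo + step, max_id)) for lo in range(0, max_id, step)]

def copy_range(source, dest, lo, hi):
    # Stand-in for the real INSERT..SELECT over one key range.
    for key in range(lo + 1, hi + 1):
        if key in source:
            dest[key] = source[key]

source = {i: f"row-{i}" for i in range(1, 1001)}   # toy 1000-row table
dest = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    for lo, hi in key_ranges(1000, 4):
        pool.submit(copy_range, source, dest, lo, hi)

print(len(dest))   # 1000: every row copied exactly once, no overlaps
```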

Tx
Karen.



