On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto wrote:
> You should check pt-archiver.
+1. It works very well for this type of job.
- Perrin
Another technique to avoid impact on the source database is to create your
target as MyISAM, pump your records into that (no ACID overhead), and at the
end:
ALTER TABLE mytable ENGINE=InnoDB;
The ALTER can take a while, but it imposes no strain on the source server
at all.
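A minimal sketch of that sequence, using a made-up copy table t1_copy (on a
separate target server you would load from a dump rather than INSERT ... SELECT):

-- Build the copy without transactional overhead, then convert it.
CREATE TABLE t1_copy LIKE t1;
ALTER TABLE t1_copy ENGINE=MyISAM;       -- no ACID overhead during the load
INSERT INTO t1_copy SELECT * FROM t1;    -- pump the records in
ALTER TABLE t1_copy ENGINE=InnoDB;       -- convert once the load is done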
From: Arjun [mailto:na...@yahoo.com]
Sent: Tuesday, July 02, 2013 12:48 AM
To: mysql@lists.mysql.com
Subject: Re: best way to copy a innodb table
Well, the easy way to chunk the inserts is with LIMIT. Here is what I
used for one of my projects:
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2 LIMIT 1000000, 1000000;
This inserts 1M records at a time, starting from the 1M-th record in t2, and
you can keep incrementing the offset for each subsequent chunk.
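Note that MySQL still reads and discards every row before the OFFSET, so each
successive chunk scans more than the last. A sketch of chunking by primary-key
range instead, assuming t2 has an integer primary key named id (a made-up
column name):

-- Copy in fixed key ranges instead of LIMIT/OFFSET.
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2
WHERE id >= 1000000 AND id < 2000000;
-- next chunk: id >= 2000000 AND id < 3000000, and so on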
Hi,
I have a support case open with MySQL on this subject. Here is what we were
able to come up with.
1. Create the table with the primary key and unique key constraints defined,
but no secondary indexes.
2. Bump up the InnoDB logs to 2M and, especially, set the InnoDB memory
(buffer pool) as high as it can go.
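A sketch of that recipe, with invented table and column names (step 2 is
my.cnf configuration, shown here only as comments):

-- 1. Target table with primary key and unique constraint only.
CREATE TABLE t1_copy (
  id   INT UNSIGNED NOT NULL,
  code VARCHAR(32) NOT NULL,
  val  VARCHAR(255),
  PRIMARY KEY (id),
  UNIQUE KEY uk_code (code)
) ENGINE=InnoDB;

-- 2. In my.cnf: raise innodb_log_file_size and innodb_buffer_pool_size,
--    then restart before the load.

-- Load first, add secondary indexes afterwards.
INSERT INTO t1_copy SELECT id, code, val FROM t1;
ALTER TABLE t1_copy ADD INDEX idx_val (val);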
Hi,
You should check pt-archiver.
http://www.percona.com/doc/percona-toolkit/pt-archiver.html
"The goal is a low-impact, forward-only job to nibble old data out of the table
without impacting OLTP queries much."
This tool can copy data and insert it into another table in chunks. The
destination table can even be on a different server.
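A typical invocation for a straight copy, with made-up host/database/table
names (--no-delete keeps the source rows, --where "1=1" selects everything):

pt-archiver \
  --source h=localhost,D=mydb,t=t1 \
  --dest h=localhost,D=mydb,t=t1_copy \
  --where "1=1" --limit 1000 --commit-each \
  --no-delete --progress 100000 --statistics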
How to break the table into 100,000 chunks?
thanks
From: Claudio Nanni
To: Angela liu
Cc: "mysql@lists.mysql.com"
Sent: Thursday, December 1, 2011 2:24 PM
Subject: Re: best way to copy a innodb table
Sure you can, and you should.
But in case you also update/delete rows in the first table, you have to
set up triggers to log the changes.
If you are lucky (only inserts), then it's easier.
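A minimal sketch of such trigger-based change logging, with an invented
changelog table and an assumed integer primary key id on t1:

-- Record updates/deletes on t1 so they can be replayed on the copy later.
CREATE TABLE t1_changes (
  change_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  action ENUM('UPDATE','DELETE') NOT NULL,
  row_id INT UNSIGNED NOT NULL,   -- primary key of the affected t1 row
  changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

CREATE TRIGGER t1_log_upd AFTER UPDATE ON t1
FOR EACH ROW INSERT INTO t1_changes (action, row_id) VALUES ('UPDATE', OLD.id);

CREATE TRIGGER t1_log_del AFTER DELETE ON t1
FOR EACH ROW INSERT INTO t1_changes (action, row_id) VALUES ('DELETE', OLD.id);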
Cheers
Claudio
2011/12/1 Angela liu
> Hi, folks:
>
>
> I have a situation:
>
> A large innodb table t1 with 45 …