since you're comfortable with rolling your own 
tools, here are some observations about the ado 
approach, which could be applied to odbc as 
well.

i'm running an oracle enterprise server in a test 
environment for corereader, and i've noticed 
that, although oracle sometimes takes a while to 
wake up, after you have its attention, it throws 
data at you very fast.  a common mistake is to 
open and close a connection for every operation.  
in your case, i would create a single connection 
and keep it open for the duration of the 
45-million-record move.
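to illustrate, here's a minimal sketch of the single-connection pattern.  it uses python's stdlib sqlite3 as a stand-in for an odbc/oracle driver (the real app would use something like pyodbc with the same db-api shape); the function and table names are illustrative:

```python
import sqlite3  # stand-in for an ODBC/Oracle DB-API driver such as pyodbc

# open ONE connection up front and reuse it for the whole move,
# rather than reconnecting per table or per batch
conn = sqlite3.connect(":memory:")

def run_migration(conn):
    # every table in the move shares this single open connection;
    # only cursors come and go
    cur = conn.cursor()
    cur.execute("SELECT 1")
    return cur.fetchone()[0]

print(run_migration(conn))  # → 1; the same connection serves every query
conn.close()
```

the point is that the expensive part (waking oracle up) happens once, not 45 million times.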

after the connection is open, i would not move 
records one at a time.  instead, i would ask 
oracle for the biggest record set that the 
infrastructure can handle.  it will come back to 
you very fast.  then i would do the inserts into 
mysql in bulk.  when that data set is done, i 
would retrieve another. 
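the fetch-a-big-set-then-bulk-insert loop looks roughly like this.  again sqlite3 stands in for both drivers, and the table name, batch size, and row count are made up for the demo:

```python
import sqlite3  # stand-in for the Oracle (source) and MySQL (target) drivers

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

# toy source data standing in for one of the 101 tables
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"row{i}") for i in range(10)])
dst.execute("CREATE TABLE t (id INTEGER, name TEXT)")

BATCH = 4  # in practice: the biggest record set the infrastructure can handle
cur = src.execute("SELECT id, name FROM t")
while True:
    rows = cur.fetchmany(BATCH)   # one big pull from the source...
    if not rows:
        break
    dst.executemany("INSERT INTO t VALUES (?, ?)", rows)  # ...one bulk insert
dst.commit()

moved = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(moved)  # → 10
```

per-row round trips disappear; you pay network latency once per batch instead of once per record.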

i would log the operation.  i always expect an 
operation of that size to overwhelm the 
infrastructure, so i would keep a running check 
of where the operation is.  i would write that 
log into a local disk file.  if the process 
crashes, you can pick up from where it went down.
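a checkpoint file for that restart logic could be as simple as the sketch below; the file name and the table/key format are assumptions, not a prescribed layout:

```python
import os

CHECKPOINT = "migrate_checkpoint.log"  # hypothetical local log file

def save_checkpoint(table, last_key):
    # write and fsync after each batch, so a crash loses at most one batch
    with open(CHECKPOINT, "w") as f:
        f.write(f"{table}\t{last_key}\n")
        f.flush()
        os.fsync(f.fileno())

def load_checkpoint():
    # on restart, resume from the last recorded table and key
    try:
        with open(CHECKPOINT) as f:
            table, last_key = f.read().split("\t")
            return table, int(last_key)
    except FileNotFoundError:
        return None, 0  # fresh start

save_checkpoint("employees", 45000)
print(load_checkpoint())  # → ('employees', 45000)
```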

be sure that oracle's nologging option is set for 
every table.  in fact, turn off all logging 
everywhere except for your app's log.  also, 
ensure that there are no indexes on the mysql 
side until you're finished.  i'm still becoming 
familiar with mysql, but ensure that there's no 
transaction logging going on.
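for reference, the statements involved look something like this (table name is illustrative; check your versions, since mysql 4.0 behavior varies by table type):

```sql
-- oracle side: skip redo logging on the source tables
ALTER TABLE employees NOLOGGING;

-- mysql side: suspend index maintenance and checks until the load is done
ALTER TABLE employees DISABLE KEYS;   -- MyISAM; run ENABLE KEYS afterwards
SET unique_checks = 0;
SET foreign_key_checks = 0;
```

remember to re-enable everything once the move completes.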

glad to hear that you had no errors before, but 
be careful of oracle's data typing.

if you have the resources, copy the app to 
additional boxes and run all of them 
simultaneously against the servers.  they'll bump 
into each other, but there's enough latency in 
all the systems to let them all operate.  
(hopefully, you're running on at least a 
100-megabit network for this.)  for your 
operation, it would be worth borrowing five or 
ten machines from the rest of the organization 
for a couple of days.
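one simple way to split the work: give each borrowed box its own slice of the 101 tables.  a round-robin partition (table names made up here) keeps the slices nearly even:

```python
# split the 101 tables across N borrowed machines so each box runs
# the same app over its own slice of tables
def partition(tables, workers):
    # round-robin keeps slice sizes within one table of each other
    slices = [[] for _ in range(workers)]
    for i, t in enumerate(tables):
        slices[i % workers].append(t)
    return slices

tables = [f"table_{i:03d}" for i in range(101)]  # illustrative names
slices = partition(tables, 5)
print([len(s) for s in slices])  # → [21, 20, 20, 20, 20]
```

each machine then runs the fetch/insert loop over only its own list, so the boxes never fight over the same table.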

if you run multiple apps, increase the query 
timeout on all of the connections.

that's the way that i would do it.


> Well, this is my first attempt at moving from Oracle to MySQL on such a
> large scale, and so far it isn't going well at all. We are running Oracle
> 7.3.4.0.1 and MySQL 4.0.4-0 standard. We are moving 101 tables with a total
> of 45 million records over. I have tried several different methods, including
> using a MS-SQL server to act as a go-between using DTS services. We have run
> into two problems: one, we don't get all the data over; we lose rows and the
> like. Two, it is super slow, on the order of 18 to 30 hours to do a pull. We
> need a way to move the data in a reasonable amount of time; around 8 hours is
> the goal. Things I have tried so far include using MS-SQL as a go-between
> (fast but with data loss), using a custom ADO application to move the data and
> handle any transforms (we get the data but it's REAL slow), and trying to get
> the oracledump.pl perl script to run, with no success. If anyone has had a
> chance to do this type of migration, please point me in the right direction.
> I would really hate to stay with Oracle for the only reason that we can't
> get the data off of it and on to another box :)
> 
> Thanks,
> Wes
> 
> 



-- 
        John Ragan
        [EMAIL PROTECTED]
        http://www.CoreReader.com/ 

