> i'm running an oracle enterprise server in a test
> environment for corereader, and i've noticed
> that, although oracle sometimes takes a while to
> wake up, after you have its attention, it throws
> data at you very fast.  sometimes a developer
> does not use connections properly.  in your case,
> i would create a single connection and keep it
> open for the duration of the 45 million record
> move.

Currently I open a connection and keep it open during the table move only.
After the whole table is moved, the app destroys the object and checks to see
if there is another table to run. I wrote the app to spawn up to 10 clients;
one PIII-550 w/256 MB RAM can handle 2 clients due to the large overhead.
What I am seeing is that on the very large tables (we really have three or
four tables that make up the bulk of all the data) the connection eventually
times out or errors out if the server has any kind of other load on it.
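One way to ride out those timeouts without babysitting the move is to wrap the per-table fetch in a reconnect-and-retry loop. This is only a sketch; `connect` and `run_query` are placeholders for whatever driver calls the loader actually makes, not anything from the app itself:

```python
import time

def fetch_with_retry(connect, run_query, max_retries=3, delay=5):
    """Re-open the connection and retry if the server drops us mid-move.

    `connect` and `run_query` are hypothetical stand-ins for the real
    driver calls (e.g. a DB-API connect() and a cursor fetch loop).
    """
    attempt = 0
    while True:
        conn = connect()
        try:
            return run_query(conn)
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # give up after repeated failures
            time.sleep(delay)  # back off before reconnecting
        finally:
            try:
                conn.close()
            except Exception:
                pass  # a dead connection may refuse to close cleanly
```

With something like this, a timeout on one of the three or four monster tables costs a reconnect instead of a dead client.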

> records.  instead, i would ask oracle for the
> biggest record set that the infrastructure can
> handle.  it will come back to you very fast.

The problem is the production machine is old and weak. I had them beef it up
to two whole gigs of RAM, and this thing at idle sits at a load rating of 2
or better.

> log into a local disk file.  if the process
> crashes, you pick up from where it went down.

I have written some error checking into the app, including error logging, but
I don't want to spend another week writing an app just to move data and
test. The load will only go as fast as the largest table in the Oracle
database; with 10 loaders the other tables get chewed through pretty quickly.
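The crash-recovery logging suggested above doesn't have to take a week; a per-table checkpoint file on local disk is enough to resume where a loader went down. A minimal sketch, with the file name and layout purely illustrative:

```python
import os

CHECKPOINT = "loader.checkpoint"  # hypothetical local state file, one per loader

def save_checkpoint(table, last_row):
    """Record the last row successfully loaded for a table.

    Write to a temp file and rename, so a crash mid-write can't
    leave a corrupt checkpoint behind.
    """
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        f.write("%s %d" % (table, last_row))
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    """Return (table, last_row) to resume from, or None to start fresh."""
    if not os.path.exists(CHECKPOINT):
        return None
    table, last_row = open(CHECKPOINT).read().split()
    return table, int(last_row)
```

On restart the loader reads the checkpoint and re-issues its query from `last_row` instead of row one, so a crash 40 million records in doesn't mean starting the 45-million-record move over.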

> transaction logging going on.

There are no indexes on the MySQL box and no logging of selects or the like
on the Oracle side.

> glad to hear that you had no errors before, but
> be careful of oracle's data typing.

That's part of what makes the app slow: I have very strict data typing and
conversions happening on very large text fields.
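For what it's worth, the per-field conversion on a 45-million-row move is exactly where the CPU goes, so it pays to keep it to the bare minimum. A sketch of a stripped-down text conversion (the function and its escaping target a tab-delimited LOAD file; it is illustrative, not the app's actual code):

```python
def to_mysql_text(value, max_len=65535):
    """Hypothetical per-field conversion: coerce an Oracle value to a
    string safe for a tab-delimited LOAD DATA INFILE file."""
    if value is None:
        return r"\N"  # LOAD DATA's marker for NULL
    s = str(value)
    # escape the characters that would break a tab-delimited load file
    s = s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
    return s[:max_len]  # clamp to the TEXT column's capacity
```

Anything beyond this (regexes, per-character loops, repeated re-encoding) multiplies across every field of every row, which is where strict typing turns into a bottleneck.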

> additional boxes.  run all of them simultaneously
> against the servers.  they'll bump into each
> other.  if you run multiple apps, increase the
> query timeout of all of the connections.
>
> that's the way that i would do it.
>

Lol, that is the way I did it.

I may finish the app; in general it will move data from MS-SQL, Oracle, and
MySQL into MS-SQL, MySQL, or a flat file. I thought about setting up a
couple of REAL beefy boxes with a couple of gigs of RAM apiece and having
them store the recordset in a disconnected method, so once Oracle is done
tossing records it's out of the loop completely.
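The disconnected idea boils down to: drain the whole result set into local memory on the beefy box, let Oracle disconnect, then feed the destination from the buffer. A sketch under that assumption, with `fetch_batch`/`write_batch` standing in for the real driver calls:

```python
def drain_then_load(fetch_batch, write_batch):
    """Buffer the entire result set locally so the source is out of
    the loop before any writing starts.

    `fetch_batch` returns the next chunk of rows (empty when done);
    `write_batch` writes one row to the destination.  Both are
    hypothetical stand-ins for the real driver calls.
    """
    buffered = []
    while True:
        batch = fetch_batch()
        if not batch:
            break  # source exhausted; it can disconnect now
        buffered.extend(batch)
    # Oracle is free at this point; the write side runs at its own pace.
    for row in buffered:
        write_batch(row)
    return len(buffered)
```

The obvious caveat is that the buffer has to fit in RAM, which is why the boxes doing it need a couple of gigs apiece.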

Right now, using LOAD has been by far the fastest method. Multiple data
loaders have only yielded about 2,000 records a second, while LOAD does
almost 12k; even on the big, big tables it doesn't slow down below 5 or 6k.
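Since LOAD wins by a factor of five or more, the pipeline reduces to: dump converted rows to a tab-delimited file, then hand the file to MySQL in one statement. A minimal sketch (the table name and file path are illustrative):

```python
def write_load_file(rows, path):
    """Dump rows (sequences of already-escaped strings) to a
    tab-delimited file that LOAD DATA INFILE can slurp in one pass,
    and return the statement the loader would then issue."""
    with open(path, "w") as f:
        for row in rows:
            f.write("\t".join(row) + "\n")
    # table name is illustrative; LOAD DATA's defaults already expect
    # tab-separated fields and newline-terminated lines
    return "LOAD DATA INFILE '%s' INTO TABLE big_table" % path
```

This matches the numbers above: the per-row INSERT path tops out around 2k rows/second, while a bulk file load skips the per-statement overhead entirely.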

Cheers,
Wes

