Hello people,

I'm seeing some "strange" behavior when rebuilding and querying a
58-million-row table, probably because I'm new to MySQL. I'm using it
through Java, by the way.

My table has three columns: two MEDIUMINT and one SMALLINT. The first
two columns form the primary key.

My first test was to write 58 million rows with randomly generated
values and create a unique index afterwards. That went perfectly, with
surprisingly good performance: around 100 seconds to read 280,000 rows
in my unit tests, dropping to 38 seconds once caching kicked in.

The weird part starts when I use live data pulled from an Oracle
database (it's stored in UTF8, if that matters). I query the data from
Oracle and write it back into the MySQL database. That goes well too,
but creating the unique index never completes; the command runs
endlessly. Writing the live data to a file and using "LOAD DATA
INFILE ..." as described in the manual is fast, and enabling the
primary key afterwards does work: the index rebuild completes in a few
minutes. But then the query takes forever. Loading the live data with
an active primary key also takes forever.
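The fast-load variant I'm describing is roughly this (file path and
table name are just examples; note that on MyISAM, DISABLE KEYS only
suspends non-unique indexes, so the primary key is still maintained
during the load):

```sql
-- Skip maintenance of non-unique indexes while loading.
ALTER TABLE t DISABLE KEYS;

-- Bulk-load the exported live data from a flat file.
LOAD DATA INFILE '/tmp/live_data.txt'
  INTO TABLE t
  FIELDS TERMINATED BY '\t';

-- Rebuild the suspended indexes in a single pass.
ALTER TABLE t ENABLE KEYS;
```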

The table and index file sizes are identical between the test data and
the live data.

My only conclusion is that fetching the data from another database
first does something to MySQL...

But what is it? And how can I load the live data and keep the
performance I got initially?

Is there somebody in here who knows the obvious thing I am missing?

Any help is more than welcome!

Kind Regards,

Jan.
