Hello.


For MyISAM tables you may want to use ALTER TABLE ... DISABLE KEYS and

ALTER TABLE ... ENABLE KEYS, as described in the manual:

  http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html
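
For example, a minimal sketch (the table name incomp_cdr is taken from
your output below; the file path and field terminator are only
illustrative):

  -- Skip maintenance of non-unique indexes during the bulk load (MyISAM only)
  ALTER TABLE incomp_cdr DISABLE KEYS;

  LOAD DATA LOCAL INFILE '/tmp/incomp.csv'
    INTO TABLE incomp_cdr
    FIELDS TERMINATED BY ',';

  -- Rebuild the non-unique indexes in a single pass afterwards
  ALTER TABLE incomp_cdr ENABLE KEYS;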



Increasing the bulk_insert_buffer_size can help as well.
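
For example (67108864 bytes = 64M is only an illustrative value to tune;
note that this buffer is used for MyISAM bulk inserts only):

  -- Per session, before running LOAD DATA:
  SET SESSION bulk_insert_buffer_size = 67108864;

  -- Or permanently in my.cnf under [mysqld]:
  --   bulk_insert_buffer_size = 64M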







"George Law" <[EMAIL PROTECTED]> wrote:



>

>List:  MySQL General Discussion

>From:  George Law      Date:   January 6 2006 11:01pm

>Subject:       Sorry for my n00bie question - mysql 5.0.18 with load data infile


>

>Hi All,

>Just wanted to apologize for my earlier rambling emails.

>I have been working on recovering from an InnoDB corruption for 2 days now

>and was really in panic mode.

> 

>Everything seems to be running well with 5.0.18, although my server

>load is up there.

>I am running 5.0.18 on a SuSE 9.3 box, P4 3 GHz, 4 GB RAM

> 

> 

> 16:38:57 up 86 days,  7:20,  4 users,  load average: 4.44, 4.41, 4.51

>USER     TTY        LOGIN@   IDLE   JCPU   PCPU WHAT

>root     tty1      01Nov05 66days  0.23s  0.23s -bash

> 

>This has been pretty much sustained all day long.

> 

> 

>I have a perl script that takes some text CDRs, reformats them into .csv

>files, and then slams them into MySQL using 'load data local'.

>Every 10 minutes, I process the raw CDRs, separate the completes and

>incompletes, generate 3 .csv files, and do 3 'load data local' commands.

>Completed CDRs go into an InnoDB table, incompletes and raw CDRs go into

>a MyISAM table.

> 

> 

>The tables are fixed length - no varchars.  Everything is forced to the

>full char length in my Perl, so my 'load data' generates no warnings on

>truncated fields.

> 

>The reason I did it this way was that the incomplete CDRs and raw CDRs are all

>"read only".  The completed CDRs have fields that get updated at a later

>time.

>There seemed to be a lot of processes out there waiting on table locks

>when this was a MyISAM table.

> 

>The 'load data infile' command seems like it is taking way too long.

> 

>(dates are just from `date` in perl):

>2006-01-06 16:35:21 : begin import into comp_cdr

>2006-01-06 16:35:42 : end import into comp_cdr (1361) (0)(0)(0)

> 

>InnoDB table: 65 inserts per second.

>

>2006-01-06 16:35:42 : begin import into incomp_cdr

>2006-01-06 16:39:10 : import into incomp_cdr2 (15084) (0)(0)(0)

>Processed 16445 CDRs in 429.60 seconds - 38.28 cdr/sec 1361 complete

>calls, 15084 incomplete calls

>

>The MyISAM table takes longer... 75 inserts a second.

> 

> 

>My understanding is that 'load data' should be considerably faster than

>doing one big loop in Perl and doing inserts.  I haven't gone back to

>Perl for this, but that might be my next step.

> 

>Anyone have any suggestions?  I am not posting any table descriptions

>here, but will do if needed.

>--

> 

> 

>

>George Law

>VoIP Network Developer

>864-678-3161

>[EMAIL PROTECTED]

>MSN: [EMAIL PROTECTED]

>

> 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___     ___ ____  __
  /  |/  /_ __/ __/ __ \/ /    Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
       <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
