>-----Original Message-----
>From: Gavin Towey [mailto:gto...@ffn.com]
>Sent: Monday, January 03, 2011 1:47 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems too slow
>
>I much prefer LOAD DATA INFILE to mysqlimport. The issue looks like you have
>a file with two columns and a table with three. You will probably need to be
>more specific about which fields in the file map to which columns in the
>table. Please report any errors from the commands you run.
>
[JS] I gave this information in a new thread that I started, "mysqlimport 
doesn't work for me."
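
For reference, the explicit mapping Gavin is describing would look something
like this in LOAD DATA INFILE form -- a sketch only, using the two file
columns and the column names from the t_dmu_history table quoted further
down, and leaving the auto-increment key for the server to fill:

=====
LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(DM_History_DM_ID, DM_History_Customer_ID);
=====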

>Also, most importantly, how slow is "slow"? Have you measured the import
>speed in terms of rows per second?
>
[JS] Let's just say you could use tree rings as a timer.

>The largest factor I have found that influences overall import speed is the
>innodb_buffer_pool_size. Make sure you're not running with the default size.
>A buffer pool that's large enough to contain the secondary indexes on the
>table will also help a lot.
>
[JS] I haven't done any tuning. I expected to do that later, when the 
application went live. (Don't worry about that.)
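
For what it's worth, a non-default buffer pool is a one-line my.cnf setting.
The 512M below is purely illustrative -- size it to your RAM and to the
indexes involved -- and on 5.1 the change takes a server restart:

=====
[mysqld]
innodb_buffer_pool_size = 512M
=====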

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>
>-----Original Message-----
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Monday, January 03, 2011 7:56 AM
>To: 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems too slow
>
>Okay, I have a confession to make: I have never gotten LOAD DATA INFILE or
>mysqlimport to work.
>
>Here's my CSV file, named "t_dmu_history.txt":
>
>13071,299519
>13071,299520
>13071,299521
>13071,299522
>13071,299524
>13071,299526
>13071,299527
>...
>
>Here's my mysqlimport command:
>
>mysqlimport -uaccess -pxxx --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --silent --fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost maintable_usa t_dmu_history.txt
>
>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31,
>for Win32 (ia32)".
>
>It runs for a while, but I wind up with only one record:
>
>localhost >select * from t_dmu_history;
>+------------------+------------------+------------------------+
>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>+------------------+------------------+------------------------+
>|                1 |            13071 |                   NULL |
>+------------------+------------------+------------------------+
>1 row in set (0.00 sec)
>
>Obviously mysqlimport is parsing the input file incorrectly, but I don't know
>why.
>
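
One thing worth checking: cmd.exe does not strip single quotes the way a
Unix shell does, so --fields-terminated-by=',' may reach mysqlimport as the
three characters ',' -- a field terminator that never matches. The whole
file would then parse as a single line and land in the first listed column,
which keeps the numeric prefix 13071 and leaves the second column NULL --
exactly the row above. A sketch of the same command with double quotes,
which cmd.exe does strip (^ is the cmd.exe line continuation):

=====
mysqlimport -uaccess -pxxx --delete --local --silent ^
  --columns="DM_History_DM_ID,DM_History_Customer_ID" ^
  --fields-terminated-by="," --lines-terminated-by="\r\n" ^
  --host=localhost maintable_usa t_dmu_history.txt
=====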
>Here's the table itself:
>
>+------------------------+---------+------+-----+---------+----------------+
>| Field                  | Type    | Null | Key | Default | Extra          |
>+------------------------+---------+------+-----+---------+----------------+
>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>+------------------------+---------+------+-----+---------+----------------+
>
>       Table: t_dmu_history
>Create Table: CREATE TABLE `t_dmu_history` (
>  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>  `DM_History_DM_ID` int(11) DEFAULT NULL,
>  `DM_History_Customer_ID` int(11) DEFAULT NULL,
>  PRIMARY KEY (`t_dmu_history_id`),
>  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: je...@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>>-----Original Message-----
>>From: mos [mailto:mo...@fastmail.fm]
>>Sent: Sunday, January 02, 2011 11:42 PM
>>To: mysql@lists.mysql.com
>>Subject: Re: This just seems too slow
>>
>>Jerry,
>>       Use "Load Data Infile" when loading a lot of data. Whoever is giving
>>you the data should be able to dump it to a CSV file. Your imports will be
>>much faster.
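
If the source system is also MySQL, the dump side is one statement as well
-- a sketch, with an illustrative output path:

=====
SELECT DM_History_DM_ID, DM_History_Customer_ID
  INTO OUTFILE 'C:/t_dmu_history.txt'
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\r\n'
  FROM T_DMU_History;
=====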
>>
>>Mike
>>
>>At 07:51 PM 1/2/2011, you wrote:
>>>I'm trying to load data into a simple table, and it is taking many hours
>>>(and still not done). I know hardware, etc., can have a big effect, but
>>>NOTHING should have this big an effect.
>>>
>>>=====
>>>us-gii >show create table t_dmu_history\G
>>>*************************** 1. row ***************************
>>>        Table: t_dmu_history
>>>Create Table: CREATE TABLE `t_dmu_history` (
>>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>>   PRIMARY KEY (`t_dmu_history_id`),
>>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>>=====
>>>
>>>Here's a snip of what the input file looks like:
>>>=====
>>>SET autocommit=1;
>>>
>>>#
>>># Dumping data for table 'T_DMU_History'
>>>#
>>>
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299519);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299520);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299521);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299522);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299524);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299526);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299527);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299528);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299529);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299531);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299532);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299533);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299534);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13071, 299535);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298880);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298881);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298882);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298883);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298884);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298885);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298886);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298887);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298889);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298890);
>>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>>VALUES (13073, 298891);
>>>=====
>>>
>>>There are about 870,000 records.
>>>
>>>I realize that using one INSERT per row is going to hurt, but I don't
>>>control the format of the incoming data.
>>>
>>>Besides, I'd have thought this would be pretty quick regardless of how
>>>clumsy the method was.
>>>
>>>Is that "autocommit" a problem? This is a bulk load into an empty table, so
>>>I'm not worried about ACID.
>>>
>>>Any suggestions?
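
With autocommit=1, each of those ~870,000 INSERTs is its own transaction,
and with InnoDB's default settings the log is flushed to disk at every
commit -- that alone can turn a minutes-long load into hours. The usual
workaround is to flip the SET autocommit=1 at the top of the file to 0 and
commit once at the end, roughly:

=====
SET autocommit=0;
-- ... the INSERT statements ...
COMMIT;
SET autocommit=1;
=====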
>>>
>>>
>>>
>>>
>>>
>>>Regards,
>>>
>>>Jerry Schwartz
>>>Global Information Incorporated
>>>195 Farmington Ave.
>>>Farmington, CT 06032
>>>
>>>860.674.8796 / FAX: 860.674.8341
>>>E-mail: je...@gii.co.jp
>>>Web site: www.the-infoshop.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
