>-----Original Message-----
>From: Wagner Bianchi [mailto:wagnerbianch...@gmail.com]
>Sent: Monday, January 03, 2011 4:42 AM
>To: Daevid Vincent
>Cc: mysql@lists.mysql.com; mos
>Subject: Re: This just seems to slow
>
>A multiple-row insert is the better choice: the data is organized into
>blocks of many rows per statement, which speeds up data insertion.
>
>[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump
>
[JS] If only I were using mysqldump :-(.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>-e: extended-inserts
>
>Best regards.
>--
>Wagner Bianchi
>
>2011/1/3 Daevid Vincent <dae...@daevid.com>
>
>> Another option would be to mangle your INSERT statements with some other
>> language like PHP, Python, Ruby, etc. so that they become multi-row inserts
>> instead. Something like:
>>
>> INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> VALUES (13071, 299519),
>>        (13071, 299520),
>>        (13071, 299521),
>>        ...
>>
>> That will radically speed up the inserts.
>>
>> Also delete your INDEX / KEYs and add them at the very end instead.
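
[JS] For the archives, here is roughly how I read that suggestion for this
particular table (untested; the PRIMARY KEY stays, since it is the
auto-increment column, and only the two secondary keys get dropped and
rebuilt afterward):

ALTER TABLE t_dmu_history
  DROP KEY DM_History_DM_ID,
  DROP KEY DM_History_Customer_ID;

-- ... run the bulk load ...

ALTER TABLE t_dmu_history
  ADD KEY DM_History_DM_ID (DM_History_DM_ID),
  ADD KEY DM_History_Customer_ID (DM_History_Customer_ID);

Building the two secondary indexes once at the end should be much cheaper
than updating them on every inserted row.
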
>>
>> -----Original Message-----
>> From: mos [mailto:mo...@fastmail.fm]
>> Sent: Sunday, January 02, 2011 8:42 PM
>> To: mysql@lists.mysql.com
>> Subject: Re: This just seems to slow
>>
>> Jerry,
>>       Use "Load Data Infile" when loading a lot of data. Whoever is giving
>> you the data should be able to dump it to a CSV file. Your imports will be
>> much faster.
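
[JS] For the archives, if the supplier could send a plain two-column CSV,
the load would look something like this (untested; the file name is made
up, and LOCAL assumes the file sits on the client side):

LOAD DATA LOCAL INFILE '/tmp/t_dmu_history.csv'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(DM_History_DM_ID, DM_History_Customer_ID);

The auto-increment t_dmu_history_id column would fill itself in.
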
>>
>> Mike
>>
>> At 07:51 PM 1/2/2011, you wrote:
>> >I'm trying to load data into a simple table, and it is taking many hours
>> >(and still not done). I know hardware, etc., can have a big effect, but
>> >NOTHING should have this big an effect.
>> >
>> >=====
>> >us-gii >show create table t_dmu_history\G
>> >*************************** 1. row ***************************
>> >        Table: t_dmu_history
>> >Create Table: CREATE TABLE `t_dmu_history` (
>> >   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>> >   `DM_History_DM_ID` int(11) DEFAULT NULL,
>> >   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>> >   PRIMARY KEY (`t_dmu_history_id`),
>> >   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>> >   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>> >) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>> >=====
>> >
>> >Here's a snip of what the input file looks like:
>> >=====
>> >SET autocommit=1;
>> >
>> >#
>> ># Dumping data for table 'T_DMU_History'
>> >#
>> >
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299519);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299520);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299521);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299522);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299524);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299526);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299527);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299528);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299529);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299531);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299532);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299533);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299534);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299535);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298880);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298881);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298882);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298883);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298884);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298885);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298886);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298887);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298889);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298890);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298891);
>> >=====
>> >
>> >There are about 870000 records.
>> >
>> >I realize that using one INSERT per row is going to hurt, but I don't
>> >control the format of the incoming data.
>> >
>> >Besides, I'd have thought this would be pretty quick regardless of how
>> >clumsy the method was.
>> >
>> >Is that "autocommit" a problem? This is a bulk load into an empty table,
>> >so I'm not worried about ACID.
>> >
>> >Any suggestions?
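
[JS] Answering my own question, since this is probably the real culprit:
with autocommit=1, InnoDB commits (and typically flushes its log to disk)
after every single INSERT, so roughly 870,000 statements means roughly
870,000 commits. Wrapping the load in one transaction, or committing in
large batches, should help enormously. Something like this around the file
(untested):

SET autocommit=0;
-- ... all of the INSERT statements, perhaps with a COMMIT every
-- 10,000 rows or so ...
COMMIT;
SET autocommit=1;
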
>> >
>> >
>> >
>> >
>> >
>> >Regards,
>> >
>> >Jerry Schwartz
>> >Global Information Incorporated
>> >195 Farmington Ave.
>> >Farmington, CT 06032
>> >
>> >860.674.8796 / FAX: 860.674.8341
>> >E-mail: je...@gii.co.jp
>> >Web site: www.the-infoshop.com
>> >
>> >
>> >
>> >
>> >



