Re: This just seems too slow
A multiple-row INSERT is the better choice: the rows are grouped into transaction blocks of many lines, which speeds up data insertion.

[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump

-e: extended-inserts

Best regards.
--
Wagner Bianchi

2011/1/3 Daevid Vincent dae...@daevid.com

Another option would be to mangle your insert statements with some other language like PHP, Python, Ruby, etc. so that the inserts are MULTI-row inserts instead. Something like:

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519), (13071, 299520), (13071, 299521), ...

That will radically speed up the inserts. Also drop your INDEX / KEYs and add them back at the very end instead.

-----Original Message-----
From: mos [mailto:mo...@fastmail.fm]
Sent: Sunday, January 02, 2011 8:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems too slow

Jerry,

Use Load Data Infile when loading a lot of data. Whoever is giving you the data should be able to dump it to a CSV file. Your imports will be much faster.

Mike

At 07:51 PM 1/2/2011, you wrote:

I'm trying to load data into a simple table, and it is taking many hours (and still not done). I know hardware, etc., can have a big effect, but NOTHING should have this big an effect.

=====
us-gii> show create table t_dmu_history\G
*************************** 1. row ***************************
       Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=====

Here's a snip of what the input file looks like:

=====
SET autocommit=1;
#
# Dumping data for table 'T_DMU_History'
#
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298891);
=====

There are about 870,000 records. I realize that using one INSERT per row is going to hurt, but I don't control the format of the incoming data. Besides, I'd have thought this would be pretty quick regardless of how clumsy the method was.

Is that autocommit a problem? This is a bulk load into an empty table, so I'm not worried about ACID.

Any suggestions?

Regards,

Jerry Schwartz
Global Information Incorporated
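The rewrite Daevid suggests can be sketched in Python (his own suggestion of a scripting language). The helper below is hypothetical, not part of the thread; it assumes every statement in the dump targets the same table and column list, as in the snippet above, and collapses them into multi-row INSERTs.

```python
import re

def batch_inserts(dump_sql, batch_size=1000):
    """Collapse single-row INSERT statements into multi-row INSERTs.

    Assumes every statement targets T_DMU_History with the same two-column
    list, as in the dump shown in the thread.
    """
    header = ("INSERT INTO `T_DMU_History` "
              "(`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES ")
    # Pull the (a, b) value tuple out of each single-row statement.
    rows = re.findall(r"VALUES\s*(\(\s*\d+\s*,\s*\d+\s*\))\s*;", dump_sql)
    batched = []
    for i in range(0, len(rows), batch_size):
        batched.append(header + ", ".join(rows[i:i + batch_size]) + ";")
    return batched

dump = """
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299521);
"""
print(batch_inserts(dump, batch_size=1000)[0])
```

With a realistic batch size (500-1000 rows per statement) this turns hundreds of thousands of round trips into a few hundred.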
ANN: New Year's Sale at Upscene Productions, 25% discount!
New Year's Sale at Upscene Productions, 25% discount!

Dear reader,

We wish you the best for 2011. We would like to point you to our New Year's Sale! Go to http://www.upscene.com and use coupon code NY2011 when ordering.

With regards,

Martijn Tonies
Upscene Productions
www.upscene.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
HNY-2011
Dear all,

A very, very Happy New Year 2011 to all. May God bless all of us to solve future problems.

Thanks and Regards

Adarsh Sharma
RE: This just seems too slow

-----Original Message-----
From: Wagner Bianchi [mailto:wagnerbianch...@gmail.com]
Sent: Monday, January 03, 2011 4:42 AM
To: Daevid Vincent
Cc: mysql@lists.mysql.com; mos
Subject: Re: This just seems too slow

A multiple-row INSERT is the better choice: the rows are grouped into transaction blocks of many lines, which speeds up data insertion.

[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump

[JS] If only I were using mysqldump :-(.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

[snip]
RE: This just seems too slow

-----Original Message-----
From: Daevid Vincent [mailto:dae...@daevid.com]
Sent: Sunday, January 02, 2011 11:49 PM
To: mysql@lists.mysql.com
Cc: 'mos'
Subject: RE: This just seems too slow

Another option would be to mangle your insert statements with some other language like PHP, Python, Ruby, etc. so that the inserts are MULTI-row inserts instead. Something like:

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519), (13071, 299520), (13071, 299521), ...

That will radically speed up the inserts.

[JS] I thought of that, but unfortunately this is just one of 25-odd tables (each in a different format, of course).

Also drop your INDEX / KEYs and add them back at the very end instead.

[JS] Wouldn't it take as long to build the indices? I guess it probably wouldn't.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

[snip]
Re: This just seems too slow

On 1/3/2011 10:41, Jerry Schwartz wrote:

-----Original Message-----
From: Daevid Vincent [mailto:dae...@daevid.com]
Sent: Sunday, January 02, 2011 11:49 PM
...
Also drop your INDEX / KEYs and add them back at the very end instead.

[JS] Wouldn't it take as long to build the indices? I guess it probably wouldn't.

It will not. MySQL must update every index for each row you insert, so right now every single-row INSERT is paying index-maintenance cost on an ever-growing table (update the indexes at 121000 rows, again at 121001 rows, again at 121002 rows, ...). Add up that total work and you can see the big improvement you get by waiting to put the indexes on the table at the very end of the process.

http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN
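Shawn's point can be demonstrated with a short sketch. This uses Python's sqlite3 module as a stand-in for MySQL (an illustrative assumption, not the thread's setup; the storage engines differ, but the trade-off between maintaining indexes on every INSERT and building them once after the load is the same):

```python
import sqlite3
import time

# 200k rows shaped like the thread's (DM_ID, Customer_ID) pairs.
rows = [(i % 50, i) for i in range(200_000)]

def timed_load(index_first):
    """Load all rows; build the two secondary indexes before or after."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (dm_id INTEGER, cust_id INTEGER)")
    t0 = time.perf_counter()
    if index_first:
        con.execute("CREATE INDEX i1 ON t (dm_id)")
        con.execute("CREATE INDEX i2 ON t (cust_id)")
    with con:  # one transaction for the whole load
        con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    if not index_first:
        con.execute("CREATE INDEX i1 ON t (dm_id)")
        con.execute("CREATE INDEX i2 ON t (cust_id)")
    elapsed = time.perf_counter() - t0
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return count, elapsed

n_pre, t_pre = timed_load(index_first=True)
n_post, t_post = timed_load(index_first=False)
print(f"indexes first: {t_pre:.2f}s, indexes last: {t_post:.2f}s")
```

Timings vary by machine and engine; the point is that both paths end with the same indexed table, so deferring the index build is free correctness-wise.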
RE: This just seems too slow

Okay, I have a confession to make: I have never gotten Load Data Infile or mysqlimport to work.

Here's my CSV file, named t_dmu_history.txt:

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --silent --fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is Ver 3.7 Distrib 5.1.31, for Win32 (ia32).

It runs for a while, but I wind up with only one record:

localhost> select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know why. Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

       Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-----Original Message-----
From: mos [mailto:mo...@fastmail.fm]
Sent: Sunday, January 02, 2011 11:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems too slow

Jerry,

Use Load Data Infile when loading a lot of data. Whoever is giving you the data should be able to dump it to a CSV file. Your imports will be much faster.

Mike

[snip]
mysqlimport doesn't work for me
(sigh) This works:

localhost> TRUNCATE t_dmu_history;
Query OK, 0 rows affected (0.41 sec)

localhost> LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
    FIELDS TERMINATED BY ','
    (`dm_history_dm_id`, `dm_history_customer_id`);
Query OK, 876211 rows affected (25.16 sec)
Records: 876211  Deleted: 0  Skipped: 0  Warnings: 0

localhost> SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                 299519 |
|                2 |            13071 |                 299520 |
|                3 |            13071 |                 299521 |
|                4 |            13071 |                 299522 |
+------------------+------------------+------------------------+
4 rows in set (0.03 sec)

This does not work:

localhost> TRUNCATE t_dmu_history;
localhost> quit

C:\Users\Jerry\Documents\Access MySQL Production> mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id` --fields-terminated=',' --local --password=xxx --pipe --user=access --verbose maintable_usa t_dmu_history.txt
Connecting to localhost
Selecting database maintable_usa
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access MySQL Production/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 876211  Deleted: 0  Skipped: 0  Warnings: 1752422
Disconnecting from localhost

localhost> SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
|                2 |            13071 |                   NULL |
|                3 |            13071 |                   NULL |
|                4 |            13071 |                   NULL |
+------------------+------------------+------------------------+
4 rows in set (0.00 sec)

=====

Before you ask, the mysql CLI is also using a named pipe.

Windows Vista 32-bit
MySQL version 5.1.31-community
Mysqlimport Ver 3.7 Distrib 5.1.31, for Win32 (ia32)

What am I missing?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com
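One way to sanity-check the input file before blaming the import tool: parse it with Python's csv module using the delimiter you intend, then with a wrong one. The tab case below is an illustrative assumption (one plausible way a mis-set field terminator could produce exactly this one-column-populated, one-column-NULL result), not a confirmed diagnosis of the thread's failure:

```python
import csv
import io

# The data from the thread, with Windows-style CRLF line endings.
sample = "13071,299519\r\n13071,299520\r\n13071,299521\r\n"

# Parsed with the intended comma delimiter, every line yields two fields:
rows = list(csv.reader(io.StringIO(sample)))
print(rows[0])   # ['13071', '299519']

# Parsed with a wrong delimiter (tab here, for illustration), each whole
# line lands in the first field; a numeric column would keep the leading
# "13071" after truncation and the second column would come up NULL:
bad = list(csv.reader(io.StringIO(sample), delimiter="\t"))
print(bad[0])    # ['13071,299519']
```

The 1752422 warnings on 876211 rows (two per row) are consistent with every row triggering the same pair of parse problems, which is why checking the delimiter and line terminator first pays off.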
RE: This just seems too slow

I much prefer LOAD DATA INFILE to mysqlimport.

The issue looks like you have a file with two columns, and a table with three. You will probably need to be more specific about which columns in the file map to which fields in the table. Please report the error with any commands you run.

Also, most importantly, how slow is slow? Have you measured the import speed in terms of rows per second?

The largest factor I have found that influences overall import speed is innodb_buffer_pool_size. Make sure you're not running with the default size. A buffer pool that's large enough to contain the secondary indexes on the table will also help a lot.

-----Original Message-----
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Monday, January 03, 2011 7:56 AM
To: 'mos'; mysql@lists.mysql.com
Subject: RE: This just seems too slow

Okay, I have a confession to make: I have never gotten Load Data Infile or mysqlimport to work.

[snip]
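Gavin's rows-per-second question can be answered from numbers already in the thread: the successful LOAD DATA LOCAL INFILE run loaded 876211 rows in 25.16 seconds. A trivial sketch of the arithmetic:

```python
def rows_per_second(rows, seconds):
    """Import throughput: rows loaded divided by elapsed wall time."""
    return rows / seconds

# Figures from the successful LOAD DATA run reported in this thread:
print(f"{rows_per_second(876211, 25.16):,.0f} rows/sec")
```

That is roughly 35,000 rows/sec, a useful baseline to compare against the many-hours single-row INSERT run.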
Re: mysqlimport doesn't work for me
It's been a long time since I used mysqlimport, but you might want to try:

- using --fields-terminated-by rather than --fields-terminated
- losing (or escaping) the backticks in --columns=
- checking my.cnf to see if the client settings are the same for mysql and mysqlimport
- checking user privileges (are you using the same account in both instances?)
- checking the line delimiter and --lines-terminated-by

FWIW, I always prefer tab-delimited files over comma-separated ones. This gets around a lot of i18n issues.

/ Carsten

Den 03-01-2011 19:33, Jerry Schwartz skrev:

(sigh) This works:

localhost> LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history FIELDS TERMINATED BY ',' (`dm_history_dm_id`,`dm_history_customer_id`);

This does not work:

C:\Users\Jerry\Documents\Access MySQL Production> mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id` --fields-terminated=',' --local --password=xxx --pipe --user=access --verbose maintable_usa t_dmu_history.txt

[snip]
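Carsten's tab-delimited preference is easy to act on before handing the file to mysqlimport or LOAD DATA INFILE. A minimal Python sketch (hypothetical helper name; assumes the two-column comma-separated file from the thread, including its CRLF line endings):

```python
import csv
import os
import tempfile

def csv_to_tsv(src_path, dst_path):
    """Rewrite a comma-separated file as tab-separated (handles CRLF input)."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t", lineterminator="\n")
        writer.writerows(csv.reader(src))

# Demo on a scratch copy of the data shown in the thread:
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "t_dmu_history.txt")
dst = os.path.join(workdir, "t_dmu_history.tsv")
with open(src, "w", newline="") as f:
    f.write("13071,299519\r\n13071,299520\r\n")
csv_to_tsv(src, dst)
with open(dst) as f:
    converted = f.read()
print(converted)
```

Going through the csv module (rather than str.replace) keeps quoted fields containing commas intact, which matters for the i18n cases Carsten mentions.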
RE: This just seems to slow
-Original Message- From: Gavin Towey [mailto:gto...@ffn.com] Sent: Monday, January 03, 2011 1:47 PM To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com Subject: RE: This just seems to slow I much prefer LOAD DATA INFILE to mysqlimport. The issue looks like you have a file with two columns, and a table with three. You will probably need to be more specific about which columns map to which fields in the file. Please report the error with any commands you run. [JS] I gave this information in a new thread that I started, mysqlimport doesn't work for me. Also, most importantly, how slow is slow? Have you measured the import speed in terms of rows per second? [JS] Let's just say you could use tree rings as a timer. The largest factor I have found that influences overall import speed is the innodb_buffer_pool_size. Make sure you're not running with the default size. A buffer pool that's large enough to contain the secondary indexes on the table will also help a lot. [JS] I haven't done any tuning. I expected to do that later, when the application went live. (Don't worry about that.) Regards, Jerry Schwartz Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341 E-mail: je...@gii.co.jp Web site: www.the-infoshop.com -Original Message- From: Jerry Schwartz [mailto:je...@gii.co.jp] Sent: Monday, January 03, 2011 7:56 AM To: 'mos'; mysql@lists.mysql.com Subject: RE: This just seems to slow Okay, I have a confession to make: I have never gotten Load Data Infile or mysqlimport to work. Here's my CSV file, named t_dmu_history.txt: 13071,299519 13071,299520 13071,299521 13071,299522 13071,299524 13071,299526 13071,299527 ... 
Here's my mysqlimport command: mysqlimport -uaccess -pxxx --delete -- columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --silent --fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost maintable_usa t_dmu_history.txt I'm running on Windows Vista, and mysqlimport is Ver 3.7 Distrib 5.1.31, for Win32 (ia32) It runs for awhile, but I wind up with only one record: localhost select * from t_dmu_history; +--+--++ | t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID | +--+--++ |1 |13071 | NULL | +--+--++ 1 row in set (0.00 sec) Obviously mysqlimport is parsing the input file incorrectly, but I don't know why. Here's the table itself: ++-+--+-+-++ | Field | Type| Null | Key | Default | Extra | ++-+--+-+-++ | t_dmu_history_id | int(11) | NO | PRI | NULL| auto_increment | | DM_History_DM_ID | int(11) | YES | MUL | NULL|| | DM_History_Customer_ID | int(11) | YES | MUL | NULL|| ++-+--+-+-++ Table: t_dmu_history Create Table: CREATE TABLE `t_dmu_history` ( `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT, `DM_History_DM_ID` int(11) DEFAULT NULL, `DM_History_Customer_ID` int(11) DEFAULT NULL, PRIMARY KEY (`t_dmu_history_id`), KEY `DM_History_DM_ID` (`DM_History_DM_ID`), KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 Regards, Jerry Schwartz Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341 E-mail: je...@gii.co.jp Web site: www.the-infoshop.com -Original Message- From: mos [mailto:mo...@fastmail.fm] Sent: Sunday, January 02, 2011 11:42 PM To: mysql@lists.mysql.com Subject: Re: This just seems to slow Jerry, Use Load Data Infile when loading a lot of data. Whoever is giving you the data should be able to dump it to a CSV file. Your imports will be much faster. Mike At 07:51 PM 1/2/2011, you wrote: I'm trying to load data into a simple table, and it is taking many hours (and still not done). 
I know hardware, etc., can have a big effect, but NOTHING should have this big an effect.

=====
us-gii> show create table t_dmu_history\G
*************************** 1. row ***************************
       Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=====

Here's a snip of what the input file looks like:

=====
SET autocommit=1;
#
# Dumping data for table 'T_DMU_History'
#
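Gavin's innodb_buffer_pool_size point is easy to check before tuning anything else. A minimal sketch, assuming MySQL 5.1 and the schema/table names from this thread (`maintable_usa`.`t_dmu_history`); on 5.1 the variable is not dynamic, so raising it means editing the `[mysqld]` section of my.cnf and restarting:

```sql
-- Current buffer pool size in bytes; the 5.1 default is only 8MB,
-- far too small for a table approaching 1.5M rows with two secondary indexes.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Rough size of the table's data and its secondary indexes, for comparison:
SELECT data_length, index_length
  FROM information_schema.TABLES
 WHERE table_schema = 'maintable_usa'
   AND table_name   = 't_dmu_history';
```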
RE: This just seems too slow
Jerry,
Try this:

mysqlimport -uusername -ppassword --verbose --debug-info --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --fields-terminated-by=, --fields-optionally-enclosed-by=\ --lines-terminated-by=\r\n --host=localhost yourdbname t_dmu_history.txt

I use LOAD DATA INFILE all the time, usually inside of a program like Delphi.

Mike

At 09:56 AM 1/3/2011, Jerry Schwartz wrote:

Okay, I have a confession to make: I have never gotten LOAD DATA INFILE or mysqlimport to work.

Here's my CSV file, named t_dmu_history.txt:

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --silent --fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is Ver 3.7 Distrib 5.1.31, for Win32 (ia32).

It runs for awhile, but I wind up with only one record:

localhost> select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know why.
RE: mysqlimport doesn't work for me
-----Original Message-----
From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
Sent: Monday, January 03, 2011 1:48 PM
To: Jerry Schwartz
Cc: 'mos'; mysql@lists.mysql.com
Subject: Re: mysqlimport doesn't work for me

It's been a long time since I used mysqlimport, but you might want to try:

- using --fields-terminated-by rather than --fields-terminated

[JS] Good catch! Unfortunately, it didn't fix the problem:

Connecting to localhost
Selecting database maintable_usa
Deleting the old data from table t_dmu_history
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access_MySQL Tests/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 1  Deleted: 0  Skipped: 0  Warnings: 2
Disconnecting from localhost

I don't know if there's any way to find out what those warnings are.

- losing (or escaping) the backticks in --columns=

[JS] The loons who designed this system were fond of putting spaces in the table names. I've tried this particular table with and without the back-ticks. Many of the table and field names are in Japanese, too. I shudder to think how that will work out.

- checking my.cnf to see if the client settings are the same for mysql and mysqlimport

[JS] Good thought.

- checking user privileges (are you using the same account in both instances?)

[JS] Yes.

- checking the line delimiter and --lines-terminated-by

FWIW, I always prefer tab-delimited files over comma-separated ones. This gets around a lot of i18n issues.

[JS] No doubt.
/ Carsten

Den 03-01-2011 19:33, Jerry Schwartz skrev:

<sigh> This works:

localhost> TRUNCATE t_dmu_history;
Query OK, 0 rows affected (0.41 sec)

localhost> LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
    FIELDS TERMINATED BY ',' (`dm_history_dm_id`,`dm_history_customer_id`);
Query OK, 876211 rows affected (25.16 sec)
Records: 876211  Deleted: 0  Skipped: 0  Warnings: 0

localhost> SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                 299519 |
|                2 |            13071 |                 299520 |
|                3 |            13071 |                 299521 |
|                4 |            13071 |                 299522 |
+------------------+------------------+------------------------+
4 rows in set (0.03 sec)

This does not work:

localhost> TRUNCATE t_dmu_history;
localhost> quit

C:\Users\Jerry\Documents\Access MySQL Production> mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id` --fields-terminated=',' --local --password=xxx --pipe --user=access --verbose maintable_usa t_dmu_history.txt
Connecting to localhost
Selecting database maintable_usa
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access MySQL Production/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 876211  Deleted: 0  Skipped: 0  Warnings: 1752422
Disconnecting from localhost

localhost> SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
|                2 |            13071 |                   NULL |
|                3 |            13071 |                   NULL |
|                4 |            13071 |                   NULL |
+------------------+------------------+------------------------+
4 rows in set (0.00 sec)

=====

Before you ask, the mysql CLI is also using a named pipe.

Windows Vista 32-bit
MySQL version 5.1.31-community
Mysqlimport Ver 3.7 Distrib 5.1.31, for Win32 (ia32)

What am I missing?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
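Since LOAD DATA LOCAL INFILE is the variant that works, the original hours-long load can also be attacked directly: commit once for the whole file instead of once per row. A hedged sketch, reusing the file and column names from the message above; disabling unique_checks is an assumption that is safe only if the input is known to be duplicate-free:

```sql
SET autocommit = 0;        -- one transaction for the whole load
SET unique_checks = 0;     -- assumption: input rows are already unique

LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n' -- the file was produced on Windows
(`dm_history_dm_id`, `dm_history_customer_id`);

COMMIT;
SET unique_checks = 1;
SET autocommit = 1;
```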
tmpdir running out of space
Hello, everybody!

I'm new to MySQL, and I hope somebody can help me. I have a 3.7G database and a 10G tmpdir. Sometimes the tmpdir runs out of space, and I get the following message in the logs:

[ERROR] /usr/libexec/mysqld: Incorrect key file for table '/tmp/tmpdir/#sql_22f_33.MYI'; try to repair it

Server load increases a lot, and MySQL eventually crashes. My question is: how can I find out what query is using so much tmpdir space? How much space should I allocate to this disk?

Santiago Soares
Fone: (41) 8488-0537
Re: tmpdir running out of space
I'm new to MySQL, and I hope somebody can help me. I have a 3.7G database and a 10G tmpdir. Sometimes the tmpdir runs out of space, and I get the following message in the logs:

[ERROR] /usr/libexec/mysqld: Incorrect key file for table '/tmp/tmpdir/#sql_22f_33.MYI'; try to repair it

Server load increases a lot, and MySQL eventually crashes. My question is: how can I find out what query is using so much tmpdir space? How much space should I allocate to this disk?

Hi Santiago,

Ensure that the /tmp filesystem on large MySQL servers is larger than the value of myisam_max_sort_file_size.

re: http://itsecureadmin.com/2010/02/mysql-tmp-usage-with-optimize-table-command/

Thanks,
Josh Miller, RHCE/VCP
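To narrow down which queries are filling tmpdir, one approach is to watch the temp-table counters and then EXPLAIN the suspects. A sketch using standard MySQL 5.x status and system variables (no query-level attribution exists in 5.x; the counters only show the server-wide trend):

```sql
-- How many implicit temp tables have been created since startup,
-- and how many of them spilled from memory to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- An in-memory temp table is converted to an on-disk MyISAM table in tmpdir
-- once it exceeds the smaller of these two limits:
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- Then EXPLAIN each suspect query and look for
-- "Using temporary; Using filesort" in the Extra column.
```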
RE: This just seems too slow
Folks, this is getting creepy. It seems like each of the variations you've given me works some times and not others. I haven't found the pattern yet.

During the hours it took my initial data loads to finish, I rewrote the import process so that (I hope) I won't have to go through this again.

Thanks for your help.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-----Original Message-----
From: mos [mailto:mo...@fastmail.fm]
Sent: Monday, January 03, 2011 3:25 PM
To: Jerry Schwartz; mysql@lists.mysql.com
Subject: RE: This just seems too slow

Jerry,
Try this:

mysqlimport -uusername -ppassword --verbose --debug-info --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --fields-terminated-by=, --fields-optionally-enclosed-by=\ --lines-terminated-by=\r\n --host=localhost yourdbname t_dmu_history.txt

I use LOAD DATA INFILE all the time, usually inside of a program like Delphi.

Mike
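For the record, the multi-row rewrite suggested earlier in the thread uses a single VALUES keyword followed by a comma-separated list of row tuples (repeating VALUES before each tuple is not valid MySQL syntax). A sketch with the first few rows from the dump:

```sql
-- One statement, many rows: one commit and one round trip instead of N.
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES
  (13071, 299519),
  (13071, 299520),
  (13071, 299521);
```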