RE: This just seems to slow

2011-01-04 Thread Jerry Schwartz
I did wind up using LOAD DATA INFILE.

When I started, I was afraid that I was going to have to process about 20
tables every day, but I redid the data exchange to avoid that.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Tuesday, January 04, 2011 12:26 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>At 05:08 PM 1/3/2011, Jerry Schwartz wrote:
>>Folks, this is getting creepy. It seems like each of the variations you've
>>given me works some times and not others. I haven't found the pattern yet.
>
>The SQL I sent you works on my server just fine with your table and your
>data. BTW, there is no way to get a list of warnings produced by
>MySQLImport. You will need to switch to SQL and execute "Load Data Infile",
>and after that has executed, do a "Show Warnings" to display the warnings.
>I always use "Load Data Infile" rather than MySQLImport because I have more
>control over its execution. Load Data should be 10x faster than using
>Inserts, so it is worth considering.
>
>The MySQL server prefers the import file to be in the server's data
>directory. If you are trying to import it from another location, you need
>to change your my.ini file. Please see the article
>http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
>
>Mike
>
>
>>During the hours it took my initial data loads to finish, I rewrote the
>>import
>>process so that (I hope) I won't have to go through this again.
>>
>>Thanks for your help.
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com
>>
>>
>> >-Original Message-
>> >From: mos [mailto:mo...@fastmail.fm]
>> >Sent: Monday, January 03, 2011 3:25 PM
>> >To: Jerry Schwartz; mysql@lists.mysql.com
>> >Subject: RE: This just seems to slow
>> >
>> >Jerry,
>> > Try this:
>> >
>> >mysqlimport -uusername -ppassword --verbose --debug-info --delete
>> >--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>> >--fields-terminated-by=","  --fields-optionally-enclosed-by="\""
>> >--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>> >
>> >
>> >I use Load Data Infile all the time, usually inside of a program like
>> >Delphi.
>> >
>> >Mike
>> >
>> >At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>> >>Okay, I have a confession to make: I have never gotten Load Data Infile 
>> >>or
>> >>mysqlimport to work.
>> >>
>> >>Here's my CSV file, named "t_dmu_history.txt":
>> >>
>> >>13071,299519
>> >>13071,299520
>> >>13071,299521
>> >>13071,299522
>> >>13071,299524
>> >>13071,299526
>> >>13071,299527
>> >>...
>> >>
>> >>Here's my mysqlimport command:
>> >>
>> >>mysqlimport -uaccess -pxxx --delete
>> >>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
>> >>  --local --silent
>> >>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>> >>maintable_usa t_dmu_history.txt
>> >>
>> >>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31,
>> >>for
>> >>Win32 (ia32)"
>> >>
>> >>It runs for awhile, but I wind up with only one record:
>> >>
>> >>localhost >select * from t_dmu_history;
>> >>+------------------+------------------+------------------------+
>> >>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> >>+------------------+------------------+------------------------+
>> >>|                1 |            13071 |                   NULL |
>> >>+------------------+------------------+------------------------+
>> >>1 row in set (0.00 sec)
>> >>
>> >>Obviously mysqlimport is parsing the input file incorrectly, but I don't
>> >>know
>> >>why.
>> >>
>> >>Here's the table itself:
>> 

RE: This just seems to slow

2011-01-04 Thread mos

At 05:08 PM 1/3/2011, Jerry Schwartz wrote:

Folks, this is getting creepy. It seems like each of the variations you've
given me works some times and not others. I haven't found the pattern yet.


The SQL I sent you works on my server just fine with your table and your
data. BTW, there is no way to get a list of warnings produced by
MySQLImport. You will need to switch to SQL and execute "Load Data Infile",
and after that has executed, do a "Show Warnings" to display the warnings.
I always use "Load Data Infile" rather than MySQLImport because I have more
control over its execution. Load Data should be 10x faster than using
Inserts, so it is worth considering.
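
For example, with Jerry's two-column file and three-column table from
earlier in the thread, the workflow would look something like this (a
sketch, not tested here; the explicit column list tells the server that
the file supplies only those two columns, so t_dmu_history_id can
auto-increment):

LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(DM_History_DM_ID, DM_History_Customer_ID);

SHOW WARNINGS;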


The MySQL server prefers the import file to be in the server's data
directory. If you are trying to import it from another location, you need to
change your my.ini file. Please see the article
http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
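
If LOCAL loads are disabled on your setup, the usual fix (a sketch,
assuming a stock 5.1 install; the option must be enabled on both the
server and the client) looks like this in my.ini:

[mysqld]
local-infile=1

[mysql]
local-infile=1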


Mike


During the hours it took my initial data loads to finish, I rewrote the 
import

process so that (I hope) I won't have to go through this again.

Thanks for your help.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Monday, January 03, 2011 3:25 PM
>To: Jerry Schwartz; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Jerry,
> Try this:
>
>mysqlimport -uusername -ppassword --verbose --debug-info --delete
>--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>--fields-terminated-by=","  --fields-optionally-enclosed-by="\""
>--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>
>
>I use Load Data Infile all the time, usually inside of a program like
>Delphi.
>
>Mike
>
>At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>>Okay, I have a confession to make: I have never gotten Load Data Infile or
>>mysqlimport to work.
>>
>>Here's my CSV file, named "t_dmu_history.txt":
>>
>>13071,299519
>>13071,299520
>>13071,299521
>>13071,299522
>>13071,299524
>>13071,299526
>>13071,299527
>>...
>>
>>Here's my mysqlimport command:
>>
>>mysqlimport -uaccess -pxxx --delete
>>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
>>  --local --silent
>>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>>maintable_usa t_dmu_history.txt
>>
>>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31,
>>for
>>Win32 (ia32)"
>>
>>It runs for awhile, but I wind up with only one record:
>>
>>localhost >select * from t_dmu_history;
>>+------------------+------------------+------------------------+
>>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>>+------------------+------------------+------------------------+
>>|                1 |            13071 |                   NULL |
>>+------------------+------------------+------------------------+
>>1 row in set (0.00 sec)
>>
>>Obviously mysqlimport is parsing the input file incorrectly, but I don't
>>know
>>why.
>>
>>Here's the table itself:
>>
>>++-+--+-+-+- 
---+
>>| Field  | Type| Null | Key | Default | 
Extra  |
>>++-+--+-+-+- 
---+
>>| t_dmu_history_id   | int(11) | NO   | PRI | NULL| 
auto_increment |
>>| DM_History_DM_ID   | int(11) | YES  | MUL | 
NULL||
>>| DM_History_Customer_ID | int(11) | YES  | MUL | 
NULL||
>>++-+--+-+-+- 
---+

>>
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
Folks, this is getting creepy. It seems like each of the variations you've 
given me works some times and not others. I haven't found the pattern yet.

During the hours it took my initial data loads to finish, I rewrote the import 
process so that (I hope) I won't have to go through this again.

Thanks for your help.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Monday, January 03, 2011 3:25 PM
>To: Jerry Schwartz; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Jerry,
> Try this:
>
>mysqlimport -uusername -ppassword --verbose --debug-info --delete
>--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>--fields-terminated-by=","  --fields-optionally-enclosed-by="\""
>--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>
>
>I use Load Data Infile all the time, usually inside of a program like Delphi.
>
>Mike
>
>At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>>Okay, I have a confession to make: I have never gotten Load Data Infile or
>>mysqlimport to work.
>>
>>Here's my CSV file, named "t_dmu_history.txt":
>>
>>13071,299519
>>13071,299520
>>13071,299521
>>13071,299522
>>13071,299524
>>13071,299526
>>13071,299527
>>...
>>
>>Here's my mysqlimport command:
>>
>>mysqlimport -uaccess -pxxx --delete
>>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
>>  --local --silent
>>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>>maintable_usa t_dmu_history.txt
>>
>>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, 
>>for
>>Win32 (ia32)"
>>
>>It runs for awhile, but I wind up with only one record:
>>
>>localhost >select * from t_dmu_history;
>>+------------------+------------------+------------------------+
>>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>>+------------------+------------------+------------------------+
>>|                1 |            13071 |                   NULL |
>>+------------------+------------------+------------------------+
>>1 row in set (0.00 sec)
>>
>>Obviously mysqlimport is parsing the input file incorrectly, but I don't 
>>know
>>why.
>>
>>Here's the table itself:
>>
>>+------------------------+---------+------+-----+---------+----------------+
>>| Field                  | Type    | Null | Key | Default | Extra          |
>>+------------------------+---------+------+-----+---------+----------------+
>>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>>+------------------------+---------+------+-----+---------+----------------+
>>
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com
>>
>>
>> >-Original Message-
>> >From: mos [mailto:mo...@fastmail.fm]
>> >Sent: Sunday, January 02, 2011 11:42 PM
>> >To: mysql@lists.mysql.com
>> >Subject: Re: This just seems to slow
>> >
>> >Jerry,
>> >   Use "Load Data Infile" when loading a lot of data. Whoever is 
>> > giving
>> >you the data should be able to dump it to a CSV file. Your imports will be
>> >much faster.
>> >
>> >Mike
>> >
>> >At 07:51 PM 1/2/2011, you wrote:
>> >>I'm trying to load data into a simple table, and it is taking many hours
>> >>(and
>> >>still not done). I know hardware, etc., can have a big effect, but 
>> >>NOT

RE: This just seems to slow

2011-01-03 Thread mos

Jerry,
Try this:

mysqlimport -uusername -ppassword --verbose --debug-info --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local 
--fields-terminated-by=","  --fields-optionally-enclosed-by="\"" 
--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt



I use Load Data Infile all the time, usually inside of a program like Delphi.

Mike

At 09:56 AM 1/3/2011, Jerry Schwartz wrote:

Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work.

Here's my CSV file, named "t_dmu_history.txt":

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
 --local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
Win32 (ia32)"

It runs for awhile, but I wind up with only one record:

localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*************************** 1. row ***************************
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Gavin Towey [mailto:gto...@ffn.com]
>Sent: Monday, January 03, 2011 1:47 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>I much prefer LOAD DATA INFILE to mysqlimport.  The issue looks like you have 
>a
>file with two columns, and a table with three.  You will probably need to be
>more specific about which columns map to which fields in the file.  Please
>report the error with any commands you run.
>
[JS] I gave this information in a new thread that I started, "mysqlimport 
doesn't work for me."

>Also, most importantly, how slow is "slow?"  Have you measured the import 
>speed
>in terms of rows per second?
>
[JS] Let's just say you could use tree rings as a timer.

>The largest factor I have found that influences overall import speed is the
>innodb_buffer_pool_size.  Make sure you're not running with the default size.
>A buffer pool that's large enough to contain the secondary indexes on the 
>table
>will also help a lot.
>
[JS] I haven't done any tuning. I expected to do that later, when the 
application went live. (Don't worry about that.)

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>
>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Monday, January 03, 2011 7:56 AM
>To: 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Okay, I have a confession to make: I have never gotten Load Data Infile or
>mysqlimport to work.
>
>Here's my CSV file, named "t_dmu_history.txt":
>
>13071,299519
>13071,299520
>13071,299521
>13071,299522
>13071,299524
>13071,299526
>13071,299527
>...
>
>Here's my mysqlimport command:
>
>mysqlimport -uaccess -pxxx --delete
>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
> --local --silent
>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>maintable_usa t_dmu_history.txt
>
>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
>Win32 (ia32)"
>
>It runs for awhile, but I wind up with only one record:
>
>localhost >select * from t_dmu_history;
>+------------------+------------------+------------------------+
>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>+------------------+------------------+------------------------+
>|                1 |            13071 |                   NULL |
>+------------------+------------------+------------------------+
>1 row in set (0.00 sec)
>
>Obviously mysqlimport is parsing the input file incorrectly, but I don't know
>why.
>
>Here's the table itself:
>
>+------------------------+---------+------+-----+---------+----------------+
>| Field                  | Type    | Null | Key | Default | Extra          |
>+------------------------+---------+------+-----+---------+----------------+
>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>+------------------------+---------+------+-----+---------+----------------+
>
>   Table: t_dmu_history
>Create Table: CREATE TABLE `t_dmu_history` (
>  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>  `DM_History_DM_ID` int(11) DEFAULT NULL,
>  `DM_History_Customer_ID` int(11) DEFAULT NULL,
>  PRIMARY KEY (`t_dmu_history_id`),
>  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: je...@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>>-Original Message-
>>From: mos [mailto:mo...@fastmail.fm]
>>Sent: Sunday, January 02, 2011 11:42 PM
>>To: mysql@lists.mysql.com
>>Subject: Re: This just seems to slow
>>
>>Jerry,
>>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>>you the data should be able to dump it to a CSV file. Your imports will be
>>much faster.
>>
>>Mike
>>
>>At 07:51 PM 1/2/2011, you wrote:
>>>I'm trying to load data into a simple table, and it is taking many hours
>>>(and
>>>still not done). I know hardware, etc., can have a big eff

RE: This just seems to slow

2011-01-03 Thread Gavin Towey
I much prefer LOAD DATA INFILE to mysqlimport.  The issue looks like you have a 
file with two columns, and a table with three.  You will probably need to be 
more specific about which columns map to which fields in the file.  Please 
report the error with any commands you run.
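
For instance, a guess at the intended command (hypothetical; the column
names are written without backticks, since a shell can otherwise mangle
them, and the list names only the two columns actually present in the
file):

mysqlimport -uaccess -p --local --delete
--columns=DM_History_DM_ID,DM_History_Customer_ID
--fields-terminated-by=, --lines-terminated-by="\r\n"
--host=localhost maintable_usa t_dmu_history.txt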

Also, most importantly, how slow is "slow?"  Have you measured the import speed 
in terms of rows per second?

The largest factor I have found that influences overall import speed is the 
innodb_buffer_pool_size.  Make sure you're not running with the default size.  
A buffer pool that's large enough to contain the secondary indexes on the table 
will also help a lot.
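
Something like this shows the current value; the 5.1 default is only 8M,
and changing it requires a my.ini edit and a server restart (the right
size depends on how much RAM the box can spare):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

# then, in my.ini:
[mysqld]
innodb_buffer_pool_size=1G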



-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Monday, January 03, 2011 7:56 AM
To: 'mos'; mysql@lists.mysql.com
Subject: RE: This just seems to slow

Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work.

Here's my CSV file, named "t_dmu_history.txt":

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
 --local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
Win32 (ia32)"

It runs for awhile, but I wind up with only one record:

localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*************************** 1. row ***************************
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520)

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
Okay, I have a confession to make: I have never gotten Load Data Infile or 
mysqlimport to work.

Here's my CSV file, named "t_dmu_history.txt":

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID` 
 --local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost 
maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for 
Win32 (ia32)"

It runs for awhile, but I wind up with only one record:

localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know 
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours 
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*************************** 1. row ***************************
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299526);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299527);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299528);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299529);
>>INSERT INTO `T_DMU_History` (`DM_Histor

Re: This just seems to slow

2011-01-03 Thread Shawn Green (MySQL)

On 1/3/2011 10:41, Jerry Schwartz wrote:

-Original Message-
From: Daevid Vincent [mailto:dae...@daevid.com]
Sent: Sunday, January 02, 2011 11:49 PM
...



Also delete your INDEX / KEYs and add them at the very end instead.


[JS] Wouldn't it take as long to build the indices? I guess it probably
wouldn't.



It will not. MySQL does not "grow" or "edit" its index files
incrementally; it computes a fresh on-disk index image for every change.


Right now, you are doing a complete index rebuild for every row you add. 
If you add up the total work you are saving (index 121000 rows, index 
121001 rows, index 121002 rows,...) then you can see a big improvement 
by waiting to put the indexes on the table at the very end of the process.


http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html
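
As a sketch of that approach for this particular table (assuming the load
can run with the table offline, since each ALTER rewrites the table):

ALTER TABLE t_dmu_history
  DROP KEY DM_History_DM_ID,
  DROP KEY DM_History_Customer_ID;

-- ... run the bulk load here ...

ALTER TABLE t_dmu_history
  ADD KEY DM_History_DM_ID (DM_History_DM_ID),
  ADD KEY DM_History_Customer_ID (DM_History_Customer_ID);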

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Daevid Vincent [mailto:dae...@daevid.com]
>Sent: Sunday, January 02, 2011 11:49 PM
>To: mysql@lists.mysql.com
>Cc: 'mos'
>Subject: RE: This just seems to slow
>
>Another option would be to mangle your insert statement with some other
>language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
>instead. Something like:
>
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299519),
>       (13071, 299520),
>       (13071, 299521),
>...
>
>That will radically speed up the inserts.
>
[JS] I thought of that, but unfortunately this is just one of 25-odd tables 
(each in a different format, of course).

>Also delete your INDEX / KEYs and add them at the very end instead.
>
[JS] Wouldn't it take as long to build the indices? I guess it probably 
wouldn't.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 8:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*************************** 1. row ***************************
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299526);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299527);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299528);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299529);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299531);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299532);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299533);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299534);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299535);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298880);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298881);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298882);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298883);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298884);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298885);
>>INSERT INTO `T_DMU_History` (`DM_Hi

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Wagner Bianchi [mailto:wagnerbianch...@gmail.com]
>Sent: Monday, January 03, 2011 4:42 AM
>To: Daevid Vincent
>Cc: mysql@lists.mysql.com; mos
>Subject: Re: This just seems to slow
>
>A multiple-line insert is the better choice: it will be organized in
>transaction blocks of many lines, and it will speed up data insertion.
>
>[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump
>
[JS] If only I were using mysqldump :-(.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>-e: extended-inserts
>
>Best regards.
>--
>Wagner Bianchi
>
>2011/1/3 Daevid Vincent 
>
>> Another option would be to mangle your insert statement with some other
>> language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
>> instead. Something like:
>>
>> INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> VALUES (13071, 299519),
>>        (13071, 299520),
>>        (13071, 299521),
>> ...
>>
>> That will radically speed up the inserts.
>>
>> Also delete your INDEX / KEYs and add them at the very end instead.
>>
>> -Original Message-----
>> From: mos [mailto:mo...@fastmail.fm]
>> Sent: Sunday, January 02, 2011 8:42 PM
>> To: mysql@lists.mysql.com
>> Subject: Re: This just seems to slow
>>
>> Jerry,
>>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>> you the data should be able to dump it to a CSV file. Your imports will be
>> much faster.
>>
>> Mike
>>
>> At 07:51 PM 1/2/2011, you wrote:
>> >I'm trying to load data into a simple table, and it is taking many hours
>> >(and
>> >still not done). I know hardware, etc., can have a big effect, but NOTHING
>> >should have this big an effect.
>> >
>> >=
>> >us-gii >show create table t_dmu_history\G
>> >*************************** 1. row ***************************
>> >Table: t_dmu_history
>> >Create Table: CREATE TABLE `t_dmu_history` (
>> >   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>> >   `DM_History_DM_ID` int(11) DEFAULT NULL,
>> >   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>> >   PRIMARY KEY (`t_dmu_history_id`),
>> >   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>> >   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>> >) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>> >=
>> >
>> >Here's a snip of what the input file looks like:
>> >=
>> >SET autocommit=1;
>> >
>> >#
>> ># Dumping data for table 'T_DMU_History'
>> >#
>> >
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299519);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299520);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299521);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299522);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299524);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299526);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299527);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299528);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299529);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299531);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299532);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299533);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299534);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299535);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298880);
>&

Re: This just seems to slow

2011-01-03 Thread Wagner Bianchi
A multiple-line insert is the better choice: it will be organized in
transaction blocks of many lines, and it will speed up data insertion.

[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump

-e: extended-inserts
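
As a rough sketch of what the extended form looks like for this table
(values taken from Jerry's dump; wrapping the batch in a transaction also
avoids a commit per row):

SET autocommit=0;
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519), (13071, 299520), (13071, 299521);
COMMIT;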

Best regards.
--
Wagner Bianchi

2011/1/3 Daevid Vincent 

> Another option would be to mangle your insert statement with some other
> language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
> instead. Something like:
>
> INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> VALUES (13071, 299519),
>        (13071, 299520),
>        (13071, 299521),
> ...
>
> That will radically speed up the inserts.
>
> Also delete your INDEX / KEYs and add them at the very end instead.
>
> -Original Message-
> From: mos [mailto:mo...@fastmail.fm]
> Sent: Sunday, January 02, 2011 8:42 PM
> To: mysql@lists.mysql.com
> Subject: Re: This just seems to slow
>
> Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
> you the data should be able to dump it to a CSV file. Your imports will be
> much faster.
>
> Mike
>
> At 07:51 PM 1/2/2011, you wrote:
> >I'm trying to load data into a simple table, and it is taking many hours
> >(and
> >still not done). I know hardware, etc., can have a big effect, but NOTHING
> >should have this big an effect.
> >
> >=
> >us-gii >show create table t_dmu_history\G
> >*************************** 1. row ***************************
> >Table: t_dmu_history
> >Create Table: CREATE TABLE `t_dmu_history` (
> >   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
> >   `DM_History_DM_ID` int(11) DEFAULT NULL,
> >   `DM_History_Customer_ID` int(11) DEFAULT NULL,
> >   PRIMARY KEY (`t_dmu_history_id`),
> >   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
> >   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
> >) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
> >=
> >
> >Here's a snip of what the input file looks like:
> >=
> >SET autocommit=1;
> >
> >#
> ># Dumping data for table 'T_DMU_History'
> >#
> >
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299519);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299520);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299521);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299522);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299524);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299526);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299527);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299528);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299529);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299531);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299532);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299533);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299534);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13071, 299535);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298880);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298881);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298882);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298883);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298884);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298885);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298886);
> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
> >VALUES (13073, 298887);
> >INSERT INTO `T_DMU_History` (`DM_Histo

RE: This just seems to slow

2011-01-02 Thread Daevid Vincent
Another option would be to mangle your insert statement with some other
language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
instead. Something like:

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519),
       (13071, 299520),
       (13071, 299521),
...

That will radically speed up the inserts.

Also delete your INDEX / KEYs and add them at the very end instead. 

-Original Message-
From: mos [mailto:mo...@fastmail.fm] 
Sent: Sunday, January 02, 2011 8:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems to slow

Jerry,
   Use "Load Data Infile" when loading a lot of data. Whoever is giving 
you the data should be able to dump it to a CSV file. Your imports will be 
much faster.

Mike

At 07:51 PM 1/2/2011, you wrote:
>I'm trying to load data into a simple table, and it is taking many hours
>(and
>still not done). I know hardware, etc., can have a big effect, but NOTHING
>should have this big an effect.
>
>=
>us-gii >show create table t_dmu_history\G
>*************************** 1. row ***************************
>Table: t_dmu_history
>Create Table: CREATE TABLE `t_dmu_history` (
>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>   PRIMARY KEY (`t_dmu_history_id`),
>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>=
>
>Here's a snip of what the input file looks like:
>=
>SET autocommit=1;
>
>#
># Dumping data for table 'T_DMU_History'
>#
>
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299519);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299520);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299521);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299522);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299524);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299526);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299527);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299528);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299529);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299531);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299532);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299533);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299534);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299535);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298880);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298881);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298882);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298883);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298884);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298885);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298886);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298887);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298889);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298890);
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13073, 298891);
>=
>
>There are about 87 records.
>
>I realize that using one INSERT per row is going to hurt, but I don't
>control
>the format of the incoming data.
>
>Besides, I'd have thought this would be pretty quick regardless of how
>clumsy
>the method was.
>
>Is that "autocommit" a problem? This is a bulk load into an empty table, so
>I'm not worried about ACID.
>
>Any suggestions?
>
>
>
>
>
>Regards,
>
>Jerry Schwartz

Re: This just seems to slow

2011-01-02 Thread mos

Jerry,
  Use "Load Data Infile" when loading a lot of data. Whoever is giving 
you the data should be able to dump it to a CSV file. Your imports will be 
much faster.


Mike

At 07:51 PM 1/2/2011, you wrote:

I'm trying to load data into a simple table, and it is taking many hours (and
still not done). I know hardware, etc., can have a big effect, but NOTHING
should have this big an effect.

=
us-gii >show create table t_dmu_history\G
*************************** 1. row ***************************
   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=

Here's a snip of what the input file looks like:
=
SET autocommit=1;

#
# Dumping data for table 'T_DMU_History'
#

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298891);
=

There are about 87 records.

I realize that using one INSERT per row is going to hurt, but I don't control
the format of the incoming data.

Besides, I'd have thought this would be pretty quick regardless of how clumsy
the method was.

Is that "autocommit" a problem? This is a bulk load into an empty table, so
I'm not worried about ACID.

Any suggestions?





Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com








This just seems to slow

2011-01-02 Thread Jerry Schwartz
I'm trying to load data into a simple table, and it is taking many hours (and 
still not done). I know hardware, etc., can have a big effect, but NOTHING 
should have this big an effect.

=
us-gii >show create table t_dmu_history\G
*************************** 1. row ***************************
   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=

Here's a snip of what the input file looks like:
=
SET autocommit=1;

#
# Dumping data for table 'T_DMU_History'
#

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298891);
=

There are about 87 records.

I realize that using one INSERT per row is going to hurt, but I don't control 
the format of the incoming data.

Besides, I'd have thought this would be pretty quick regardless of how clumsy 
the method was.

Is that "autocommit" a problem? This is a bulk load into an empty table, so 
I'm not worried about ACID.

Any suggestions?





Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




