kill LOAD DATA INFILE

2014-03-24 Thread Rafael Valenzuela
Hi all,
I have a question: I need to kill a LOAD DATA INFILE. Normally I use
SHOW PROCESSLIST and KILL with the connection id, but that doesn't work here.
Any idea?
Thanks :D

{ name : Rafael Valenzuela,

  open source: [Saiku Admin Console,Anaytical Sport],

  location : Madrid Spain,

  twitter  : [@sowe https://twitter.com/sowe] ,

  linkedin : profile http://es.linkedin.com/pub/rafael-valenzuela/19/694/61a/
}
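
A minimal sketch of the usual approach, assuming the LOAD DATA runs on its own
connection (the id 12345 below is made up). The id column of SHOW PROCESSLIST
is a MySQL connection id, not an OS PID, and killing a LOAD DATA into an
InnoDB table triggers a rollback that can itself take a long time, which can
look like the KILL "not working":

SHOW FULL PROCESSLIST;      -- find the connection running LOAD DATA INFILE
KILL QUERY 12345;           -- abort only the running statement
KILL CONNECTION 12345;      -- or drop the whole connection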


Re: LOAD DATA INFILE with space after quote but before comma

2013-12-19 Thread hsv
 2013/12/18 11:07 -0500, Anthony Ball 
I ran across a curious issue, I'd call it a bug but I'm sure others would
call it a feature.

I have a csv file with a space between the " and the , and it causes MySQL to eat
that field and the field after it as a single field. Is there a setting I
can use to remedy this or do I just have to make sure no whitespace
intrudes?

Well, strictly speaking, it is a bug, in your file. If you can keep that from 
happening, that is best, because in a CSV file the quote mark may appear only 
first, last, or next to a separator, unless it quotes another quote mark.

Otherwise, if it is consistent, as in Dhaval Jaiswal's (2), simply do as he 
suggests.





LOAD DATA INFILE with space after quote but before comma

2013-12-18 Thread Anthony Ball
I ran across a curious issue, I'd call it a bug but I'm sure others would
call it a feature.

I have a csv file with a space between the " and the , and it causes MySQL to eat
that field and the field after it as a single field. Is there a setting I
can use to remedy this or do I just have to make sure no whitespace
intrudes?

Here is an example:

"testa" ,"testb"



create temporary table testa (a char(15), b char(5)); LOAD DATA LOCAL
INFILE '/tmp/test.csv' INTO TABLE testa FIELDS TERMINATED BY ',' OPTIONALLY
ENCLOSED BY '"';



Data in table is

mysql> select * from testa;
+----------------+------+
| a              | b    |
+----------------+------+
| testa" ,"testb | NULL |
+----------------+------+


Re: LOAD DATA INFILE with space after quote but before comma

2013-12-18 Thread Dhaval Jaiswal
(1)
Yes, it is an issue I have faced too. As a remedy I search the .csv itself for
the " , sequence (quote, space, comma) and replace it with ", (quote, comma).

(2)
The other way is, if all the values have the space between the " and the ,
then you can use space plus comma in FIELDS TERMINATED BY:

 LOAD DATA LOCAL INFILE '/tmp/test.csv' INTO TABLE testa FIELDS TERMINATED
BY ' ,' OPTIONALLY
ENCLOSED BY '"';

(3) Convert the .csv into INSERT statements, or use mysqlimport.
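
A hedged, fuller version of option (2), assuming the sample file and table
from the original post; the LINES TERMINATED BY clause and the TRIM cleanup
are additions, not part of the suggestion above:

LOAD DATA LOCAL INFILE '/tmp/test.csv' INTO TABLE testa
  FIELDS TERMINATED BY ' ,' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

-- stray whitespace can still reach the last field of each line
UPDATE testa SET a = TRIM(a), b = TRIM(b);
SELECT * FROM testa;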




On Wed, Dec 18, 2013 at 9:37 PM, Anthony Ball a...@suave.net wrote:

 I ran across a curious issue, I'd call it a bug but I'm sure others would
 call it a feature.

 I have a csv file with a space between the " and the , and it causes MySQL to eat
 that field and the field after it as a single field. Is there a setting I
 can use to remedy this or do I just have to make sure no whitespace
 intrudes?

 Here is an example:

 "testa" ,"testb"



 create temporary table testa (a char(15), b char(5)); LOAD DATA LOCAL
 INFILE '/tmp/test.csv' INTO TABLE testa FIELDS TERMINATED BY ',' OPTIONALLY
 ENCLOSED BY '"';



 Data in table is

 mysql> select * from testa;
 +----------------+------+
 | a              | b    |
 +----------------+------+
 | testa" ,"testb | NULL |
 +----------------+------+




-- 

Regards
Dhaval


LOAD DATA INFILE Problem

2011-11-10 Thread spikerlion
Hello,

after switching from MySQL 5.0.67 to 5.1.59 we get the following problem:

We want to import data from a text file.
Example of the text file:
t...@test.com$yes$432145$xyz

The command is: LOAD DATA LOCAL INFILE 'textfile.txt' INTO TABLE TESTTABLE 
FIELDS TERMINATED BY '$';

Selecting the data from TESTTABLE with select * from TESTTABLE; the data in 
the first field looks like |test.com. If I do a select * from TESTTABLE\G instead, I 
can see t...@test.com in the first field.

Are there any changes between MySQL 5.0 and 5.1 ?


Best Regards
Spiker
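
A hedged diagnostic sketch, not from the thread: output that looks overwritten
in plain SELECT * but fine with \G often points to a trailing carriage return
imported from a Windows-style file. The column name col1 below is made up:

SELECT HEX(col1) FROM TESTTABLE LIMIT 1;   -- a trailing 0D means a stray \r

-- if so, reload naming the line ending explicitly:
LOAD DATA LOCAL INFILE 'textfile.txt' INTO TABLE TESTTABLE
  FIELDS TERMINATED BY '$'
  LINES TERMINATED BY '\r\n';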



Re: Error in mysql replication with LOAD DATA INFILE

2010-12-20 Thread Anand Kumar
The application is designed in such a way that it processes the csv
files daily, as part of the aggregate process that calculates some metrics.

It runs fine on the master; when it comes to the slave through replication it
fails with the error.

I even tried upgrading the slave to the latest version, MySQL 5.1.53, after I saw
some posts on the internet saying there were issues in the older version,
but it keeps giving the same error.


thanks
Anand


On Mon, Dec 20, 2010 at 7:42 PM, who.cat win@gmail.com wrote:

 I want to know: you have done the LOAD DATA INFILE on the master, so why are you
 trying to do it on the slave? Didn't the master replicate the data to the slave?

 All the best
 
 What we are struggling for ?
 The life or the life ?




 On Mon, Dec 20, 2010 at 3:32 PM, Anand Kumar sanan...@gmail.com wrote:

  On Mon, Dec 20, 2010 at 9:00 AM, Anand anand@gmail.com wrote:

  Hi guys,
 
  i am facing a serious issue with my replication , i tried so many things
  but no luck.
 
  my replication is running with mysql 5.0.51a in master and 5.0.90 in
 slave.
 
  we run LOAD DATA INFILE in master to process some csv files and load it
  into a table, it runs perfectly well in master but when it comes to
 slave it
  stops with SQL SYNTAX error
 
  i tried running the LOAD DATA INFILE manually on the slave , but it says
  different error as below
 
  mysql> load data infile '/tmp/SQL_LOAD-4-3-161.data' into table t;
  ERROR 1085: The file '/tmp/SQL_LOAD-4-3-161.data' must be in the database
  directory or be readable by all

  when I checked the file permission it is

  -rw-rw---- 1 mysql mysql  0 Dec 18 23:53  /tmp/SQL_LOAD-4-3-161
 
 
 
  snippet from my error log
  
 
  101219  0:06:32 [Note] Slave SQL thread initialized, starting
 replication
  in log '.000127' at position 923914670, relay log
  '/var/lib/mysql/slave-relay.02' position: 39311
  101219  0:06:32 [Note] Slave I/O thread: connected to master
  'repli_u...@221.131.104.66:3306',replication started in log '.000127'
 at
  position 946657303
  101219  0:06:33 [ERROR] Slave SQL: Error 'You have an error in your SQL
  syntax; check the manual that corresponds to your MySQL server version
 for
  the right syntax to use near ''' at line 1' on query. Default database:
  'caratlane_diamonds'. Query: 'LOAD DATA INFILE
 '/tmp/SQL_LOAD-4-3-161.data'
  IGNORE INTO  TABLE osc_diamond_backup FIELDS TERMINATED BY ',' ENCLOSED
 BY
  '' ESCAPED BY '', Error_code: 1064
  101219  0:06:33 [Warning] Slave: You have an error in your SQL syntax;
  check the manual that corresponds to your MySQL server version for the
 right
  syntax to use near ''' at line 1 Error_code: 1064
  101219  0:06:33 [ERROR] Error running query, slave SQL thread aborted.
 Fix
  the problem, and restart the slave SQL thread with SLAVE START. We
 stopped
  at log '.000127' position 926912155
 
 
 
  please help me fixing this ..
 
  thanks in advance..
 
  thanks
  Anand
 





Error in mysql replication with LOAD DATA INFILE

2010-12-19 Thread Anand
Hi guys,

I am facing a serious issue with my replication; I have tried so many things but
no luck.

My replication is running with MySQL 5.0.51a on the master and 5.0.90 on the slave.

We run LOAD DATA INFILE on the master to process some csv files and load them into
a table. It runs perfectly well on the master, but when it comes to the slave it
stops with an SQL SYNTAX error.

I tried running the LOAD DATA INFILE manually on the slave, but it gives a
different error, as below:

mysql> load data infile '/tmp/SQL_LOAD-4-3-161.data' into table t;
ERROR 1085: The file '/tmp/SQL_LOAD-4-3-161.data' must be in the database
directory or be readable by all

When I checked the file permission it is

-rw-rw---- 1 mysql mysql  0 Dec 18 23:53  /tmp/SQL_LOAD-4-3-161



snippet from my error log


101219  0:06:32 [Note] Slave SQL thread initialized, starting replication in
log '.000127' at position 923914670, relay log
'/var/lib/mysql/slave-relay.02' position: 39311
101219  0:06:32 [Note] Slave I/O thread: connected to master
'repli_u...@221.131.104.66:3306',replication started in log '.000127' at
position 946657303
101219  0:06:33 [ERROR] Slave SQL: Error 'You have an error in your SQL
syntax; check the manual that corresponds to your MySQL server version for
the right syntax to use near ''' at line 1' on query. Default database:
'caratlane_diamonds'. Query: 'LOAD DATA INFILE '/tmp/SQL_LOAD-4-3-161.data'
IGNORE INTO  TABLE osc_diamond_backup FIELDS TERMINATED BY ',' ENCLOSED BY
'' ESCAPED BY '', Error_code: 1064
101219  0:06:33 [Warning] Slave: You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server version for the right
syntax to use near ''' at line 1 Error_code: 1064
101219  0:06:33 [ERROR] Error running query, slave SQL thread aborted. Fix
the problem, and restart the slave SQL thread with SLAVE START. We stopped
at log '.000127' position 926912155



please help me fix this.

thanks in advance..

thanks
Anand
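
A hedged sketch of the usual ways around ERROR 1085: a server-side LOAD DATA
(without LOCAL) needs a file that is either inside the database directory or
readable by all, so either make the file world-readable (for example
chmod 644 /tmp/SQL_LOAD-4-3-161.data from the shell) or let the client read it:

-- the client, not the server, opens the file here
LOAD DATA LOCAL INFILE '/tmp/SQL_LOAD-4-3-161.data' INTO TABLE t;

Note that this only addresses the manual test; the replication stop itself is
the quoted syntax failure shown in the error log above.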


Re: Error in mysql replication with LOAD DATA INFILE

2010-12-19 Thread Lee Gary
Hi Anand,

Just try 'load data local infile'; it may work.

Eric

2010/12/20 Anand anand@gmail.com:
 Hi guys,

 i am facing a serious issue with my replication , i tried so many things but
 no luck.

 my replication is running with mysql 5.0.51a in master and 5.0.90 in slave.

 we run LOAD DATA INFILE in master to process some csv files and load it into
 a table, it runs perfectly well in master but when it comes to slave it
 stops with SQL SYNTAX error

 i tried running the LOAD DATA INFILE manually on the slave , but it says
 different error as below

 mysql> load data infile '/tmp/SQL_LOAD-4-3-161.data' into table t;
 ERROR 1085: The file '/tmp/SQL_LOAD-4-3-161.data' must be in the database
 directory or be readable by all

 when I checked the file permission it is

 -rw-rw---- 1 mysql mysql          0 Dec 18 23:53  /tmp/SQL_LOAD-4-3-161



 snippet from my error log
 

 101219  0:06:32 [Note] Slave SQL thread initialized, starting replication in
 log '.000127' at position 923914670, relay log
 '/var/lib/mysql/slave-relay.02' position: 39311
 101219  0:06:32 [Note] Slave I/O thread: connected to master
 'repli_u...@221.131.104.66:3306',replication started in log '.000127' at
 position 946657303
 101219  0:06:33 [ERROR] Slave SQL: Error 'You have an error in your SQL
 syntax; check the manual that corresponds to your MySQL server version for
 the right syntax to use near ''' at line 1' on query. Default database:
 'caratlane_diamonds'. Query: 'LOAD DATA INFILE '/tmp/SQL_LOAD-4-3-161.data'
 IGNORE INTO  TABLE osc_diamond_backup FIELDS TERMINATED BY ',' ENCLOSED BY
 '' ESCAPED BY '', Error_code: 1064
 101219  0:06:33 [Warning] Slave: You have an error in your SQL syntax; check
 the manual that corresponds to your MySQL server version for the right
 syntax to use near ''' at line 1 Error_code: 1064
 101219  0:06:33 [ERROR] Error running query, slave SQL thread aborted. Fix
 the problem, and restart the slave SQL thread with SLAVE START. We stopped
 at log '.000127' position 926912155



 please help me fixing this ..

 thanks in advance..

 thanks
 Anand





Re: Error in mysql replication with LOAD DATA INFILE

2010-12-19 Thread Anand Kumar
On Mon, Dec 20, 2010 at 9:00 AM, Anand anand@gmail.com wrote:

 Hi guys,

 i am facing a serious issue with my replication , i tried so many things
 but no luck.

 my replication is running with mysql 5.0.51a in master and 5.0.90 in slave.

 we run LOAD DATA INFILE in master to process some csv files and load it
 into a table, it runs perfectly well in master but when it comes to slave it
 stops with SQL SYNTAX error

 i tried running the LOAD DATA INFILE manually on the slave , but it says
 different error as below

 mysql> load data infile '/tmp/SQL_LOAD-4-3-161.data' into table t;
 ERROR 1085: The file '/tmp/SQL_LOAD-4-3-161.data' must be in the database
 directory or be readable by all

 when I checked the file permission it is

 -rw-rw---- 1 mysql mysql  0 Dec 18 23:53  /tmp/SQL_LOAD-4-3-161



 snippet from my error log
 

 101219  0:06:32 [Note] Slave SQL thread initialized, starting replication
 in log '.000127' at position 923914670, relay log
 '/var/lib/mysql/slave-relay.02' position: 39311
 101219  0:06:32 [Note] Slave I/O thread: connected to master
 'repli_u...@221.131.104.66:3306',replication started in log '.000127' at
 position 946657303
 101219  0:06:33 [ERROR] Slave SQL: Error 'You have an error in your SQL
 syntax; check the manual that corresponds to your MySQL server version for
 the right syntax to use near ''' at line 1' on query. Default database:
 'caratlane_diamonds'. Query: 'LOAD DATA INFILE '/tmp/SQL_LOAD-4-3-161.data'
 IGNORE INTO  TABLE osc_diamond_backup FIELDS TERMINATED BY ',' ENCLOSED BY
 '' ESCAPED BY '', Error_code: 1064
 101219  0:06:33 [Warning] Slave: You have an error in your SQL syntax;
 check the manual that corresponds to your MySQL server version for the right
 syntax to use near ''' at line 1 Error_code: 1064
 101219  0:06:33 [ERROR] Error running query, slave SQL thread aborted. Fix
 the problem, and restart the slave SQL thread with SLAVE START. We stopped
 at log '.000127' position 926912155



 please help me fixing this ..

 thanks in advance..

 thanks
 Anand



Load Data Infile Errors

2010-10-25 Thread James W. McKelvey

Hello,

I have a question about the execution cycle of LOAD DATA INFILE.
If I issue a large file via LDI LOCAL, I know that the file is copied to 
the MySQL server and executed there.


But at what point does the statement finish from the sender's point of view?

1) When the file is successfully copied?
2) When the file is copied and parsed?
3) When the file is completely processed?

I'm guessing 2).

The reason for asking is to determine what errors may be returned and 
how I can deal with them.


Is it possible for the file to be partially processed, say, inserting 
the first half of the rows? If 2) or 3), I would say no (barring some 
serious server error).


Since LOCAL implies IGNORE, is there any way to get the number of 
ignored rows? What about replace?


Ultimately I want to know under what conditions I should reissue the 
file, and whether or not that could introduce duplicate entries for 
tables with non-unique keys.


Thanks!




RE: Load Data Infile Errors

2010-10-25 Thread Gavin Towey
The answer is 3 =)

With MyISAM tables, you can have partially complete statements.  That is, if you 
get an error, all rows handled before the error are still in the table.  With 
InnoDB, an error generates a rollback and your table is returned to its state 
before the statement was run.

To find the actual number of rows processed when using REPLACE or IGNORE, see 
the ROW_COUNT() function:
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count
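
A hedged usage sketch of the ROW_COUNT() suggestion; the table and file names
below are made up:

LOAD DATA LOCAL INFILE '/tmp/data.csv'
  IGNORE INTO TABLE t
  FIELDS TERMINATED BY ',';
SELECT ROW_COUNT();   -- rows reported affected by the last statement
SHOW WARNINGS;        -- details on skipped or malformed rows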

Regards,
Gavin Towey


-Original Message-
From: James W. McKelvey [mailto:james.w.mckel...@jpl.nasa.gov]
Sent: Monday, October 25, 2010 12:16 PM
To: mysql@lists.mysql.com
Subject: Load Data Infile Errors

Hello,

I have a question about the execution cycle of LOAD DATA INFILE.
If I issue a large file via LDI LOCAL, I know that the file is copied to
the MySQL server and executed there.

But at what point does the statement finish from the sender's point of view?

1) When the file is successfully copied?
2) When the file is copied and parsed?
3) When the file is completely processed?

I'm guessing 2).

The reason for asking is to determine what errors may be returned and
how I can deal with them.

Is it possible for the file to be partially processed, say, inserting
the first half of the rows? If 2) or 3), I would say no (barring some
serious server error).

Since LOCAL implies IGNORE, is there any way to get the number of
ignored rows? What about replace?

Ultimately I want to know under what conditions I should reissue the
file, and whether or not that could introduce duplicate entries for
tables with non-unique keys.

Thanks!




Re: Any faster building primary/unique indexes after Load Data Infile?

2010-02-25 Thread Baron Schwartz
Hi,

On Sun, Feb 21, 2010 at 1:42 PM, mos mo...@fastmail.fm wrote:
 I am loading 35 million rows of data into an empty MyISAM table. This table
 has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes.

 Is it going to be any faster if I remove the indexes from the table before
 loading the data, load the data, then do an Alter Table .. add index 
 for all of the indexes?
 Or is it faster to just leave the indexes in place prior to loading the
 data.

 I know if the table is empty and optimized, the non-unique indexes will be
 built AFTER the data is loaded using Load Data Infile, but the unique and
 primary indexes will be built as the data is being loaded and this is going
 to slow down the import.

 There is no point doing a Disable Indexes on the table because this only
 affects non-unique indexes and that is already taken care of since the table
 is already empty and optimized.

 But if I remove the indexes from the empty table then load the data, then
 execute the Alter Table Add Index ... for all 4 indexes at one time, isn't
 the Alter Table going to create a copy of the table so it is just going to
 reload the data all over again?

Yes.  It is going to create a new table, copy the rows into it, and
then delete the old one.

 Is there any way to add a primary or unique index without copy the data all
 over again? Create Index ... can't be used to create a primary index.

Dirty hacks with .frm files and REPAIR TABLE have sometimes been known
to help in cases like this.  But it's not for the faint of heart.




Re: Any faster building primary/unique indexes after Load Data Infile?

2010-02-25 Thread Ananda Kumar
Then, before loading:

alter table table_name disable keys;
load data ...
alter table table_name enable keys;

 This will enable faster data load and faster index rebuild.

regards
anandkl

On Fri, Feb 26, 2010 at 8:03 AM, Baron Schwartz ba...@xaprb.com wrote:

 Hi,

 On Sun, Feb 21, 2010 at 1:42 PM, mos mo...@fastmail.fm wrote:
  I am loading 35 million rows of data into an empty MyISAM table. This
 table
  has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes.
 
  Is it going to be any faster if I remove the indexes from the table
 before
  loading the data, load the data, then do an Alter Table .. add index 
  for all of the indexes?
  Or is it faster to just leave the indexes in place prior to loading the
  data.
 
  I know if the table is empty and optimized, the non-unique indexes will
 be
  built AFTER the data is loaded using Load Data Infile, but the unique and
  primary indexes will be built as the data is being loaded and this is
 going
  to slow down the import.
 
  There is no point doing a Disable Indexes on the table because this only
  affects non-unique indexes and that is already taken care of since the
 table
  is already empty and optimized.
 
  But if I remove the indexes from the empty table then load the data, then
  execute the Alter Table Add Index ... for all 4 indexes at one time,
 isn't
  the Alter Table going to create a copy of the table so it is just going
 to
  reload the data all over again?

 Yes.  It is going to create a new table, copy the rows into it, and
 then delete the old one.

  Is there any way to add a primary or unique index without copy the data
 all
  over again? Create Index ... can't be used to create a primary index.

 Dirty hacks with .frm files and REPAIR TABLE have sometimes been known
 to help in cases like this.  But it's not for the faint of heart.





Any faster building primary/unique indexes after Load Data Infile?

2010-02-21 Thread mos
I am loading 35 million rows of data into an empty MyISAM table. This table 
has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes.


Is it going to be any faster if I remove the indexes from the table before 
loading the data, load the data, then do an Alter Table .. add index  
for all of the indexes?

Or is it faster to just leave the indexes in place prior to loading the data.

I know if the table is empty and optimized, the non-unique indexes will be 
built AFTER the data is loaded using Load Data Infile, but the unique and 
primary indexes will be built as the data is being loaded and this is going 
to slow down the import.


There is no point doing a Disable Indexes on the table because this only 
affects non-unique indexes and that is already taken care of since the 
table is already empty and optimized.


But if I remove the indexes from the empty table then load the data, then 
execute the Alter Table Add Index ... for all 4 indexes at one time, isn't 
the Alter Table going to create a copy of the table so it is just going to 
reload the data all over again?


Is there any way to add a primary or unique index without copying the data all 
over again? Create Index ... can't be used to create a primary index.


TIA
Mike

MySQL 5.1





Re: Any faster building primary/unique indexes after Load Data Infile?

2010-02-21 Thread Carsten Pedersen
Generally, you should find that removing and re-adding the indexes will 
speed up your operation.


I do not believe that ALTER TABLE with just index additions will require 
a table rebuild, but even if it does, doing a table copy will be a 
fairly fast operation (much faster than loading from other sources).


Don't forget to set the MyISAM sort buffer size high while you create 
the indexes.
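
A minimal sketch of the sort-buffer suggestion, sized arbitrarily to 256M; the
table name t is made up:

SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;
ALTER TABLE t ENABLE KEYS;   -- or the ALTER TABLE ... ADD INDEX statements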


/ Carsten

mos wrote:
I am loading 35 million rows of data into an empty MyISAM table. This 
table has 1 primary key (AutoInc) and 1 unique index and 2 non-unique 
indexes.


Is it going to be any faster if I remove the indexes from the table 
before loading the data, load the data, then do an Alter Table .. add 
index  for all of the indexes?
Or is it faster to just leave the indexes in place prior to loading the 
data.


I know if the table is empty and optimized, the non-unique indexes will 
be built AFTER the data is loaded using Load Data Infile, but the unique 
and primary indexes will be built as the data is being loaded and this 
is going to slow down the import.


There is no point doing a Disable Indexes on the table because this only 
affects non-unique indexes and that is already taken care of since the 
table is already empty and optimized.


But if I remove the indexes from the empty table then load the data, 
then execute the Alter Table Add Index ... for all 4 indexes at one 
time, isn't the Alter Table going to create a copy of the table so it is 
just going to reload the data all over again?


Is there any way to add a primary or unique index without copy the data 
all over again? Create Index ... can't be used to create a primary index.


TIA
Mike

MySQL 5.1







RE: Load Data Infile quirk

2009-10-19 Thread mos

At 05:40 AM 10/18/2009, John wrote:

Mike,

What behaviour you experience depends to some extent on what storage engine
you are using and on what other non-unique indexes you have on the tables.

With LOAD DATA INFILE on empty MyISAM tables all non-unique indexes are
created in a separate batch which makes it much faster if you have a lot of
indexes.


Ok, I thought that ALL indexes would be rebuilt later, including my primary 
index, and one unique index I have on the table. I must have misread that 
in the manual. Thanks.



From memory you can create the indexes faster by turning them off
with 'ALTER TABLE tablename DISABLE KEYS' before the 'LOAD DATA INFILE'
command and then using 'ALTER TABLE tablename ENABLE KEYS' to re-create the
indexes after the LOAD DATA INFILE completes.


But Disable Keys has no effect on primary or unique indexes. So the only 
way for me to speed this up when loading data into empty tables is to remove 
all indexes and build them after the data has been loaded. That should save 
me 30% on the load times.


Mike




Regards

John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk

-Original Message-
From: mos [mailto:mo...@fastmail.fm]
Sent: 17 October 2009 22:49
To: mysql@lists.mysql.com
Subject: Load Data Infile quirk

I'm trying to speed up Load Data Infile and after some experimenting have
noticed this quirk.

BTW, all of the tables used below are empty and have identical table
structures. The value being loaded into the primary key column is 'NULL'.

Test1:
246 seconds to run Load Data Infile into a table (Table1) with 1 primary
autoinc column, and 2 compound keys.

Test2:
  69 seconds to  run Load Data Infile into similar table (Table2) with no
keys
111 seconds to rebuild the missing keys in Table2

69+111=180 seconds for Table2 compared to 246 seconds for Table1.

Now I thought when using Load Data Infile on an empty table it would
rebuild *all* of the keys AFTER the data has been loaded. This may not be
the case. I suspect the extra time for
Test1 is caused by the Load Data building the primary key as the data is
being loaded.

Can someone confirm this?
If so, then when loading data into an empty table, it is always going to be
faster to remove the keys then load the data, then add the keys.

Mike





RE: Load Data Infile quirk

2009-10-18 Thread John
Mike,

What behaviour you experience depends to some extent on what storage engine
you are using and on what other non-unique indexes you have on the tables.

With LOAD DATA INFILE on empty MyISAM tables all non-unique indexes are
created in a separate batch which makes it much faster if you have a lot of
indexes. From memory you can create the indexes faster by turning them off
with 'ALTER TABLE tablename DISABLE KEYS' before the 'LOAD DATA INFILE'
command and then using 'ALTER TABLE tablename ENABLE KEYS' to re-create the
indexes after the LOAD DATA INFILE completes.
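
The sequence described above, spelled out as a hedged sketch (table name, file
path and field terminator are made up; remember DISABLE KEYS only affects
non-unique MyISAM indexes):

ALTER TABLE tablename DISABLE KEYS;
LOAD DATA INFILE '/path/to/data.txt' INTO TABLE tablename
  FIELDS TERMINATED BY '\t';
ALTER TABLE tablename ENABLE KEYS;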

Regards

John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk

-Original Message-
From: mos [mailto:mo...@fastmail.fm] 
Sent: 17 October 2009 22:49
To: mysql@lists.mysql.com
Subject: Load Data Infile quirk

I'm trying to speed up Load Data Infile and after some experimenting have 
noticed this quirk.

BTW, all of the tables used below are empty and have identical table 
structures. The value being loaded into the primary key column is 'NULL'.

Test1:
246 seconds to run Load Data Infile into a table (Table1) with 1 primary 
autoinc column, and 2 compound keys.

Test2:
  69 seconds to  run Load Data Infile into similar table (Table2) with no
keys
111 seconds to rebuild the missing keys in Table2

69+111=180 seconds for Table2 compared to 246 seconds for Table1.

Now I thought when using Load Data Infile on an empty table it would 
rebuild *all* of the keys AFTER the data has been loaded. This may not be 
the case. I suspect the extra time for
Test1 is caused by the Load Data building the primary key as the data is 
being loaded.

Can someone confirm this?
If so, then when loading data into an empty table, it is always going to be 
faster to remove the keys then load the data, then add the keys.

Mike





Load Data Infile quirk

2009-10-17 Thread mos
I'm trying to speed up Load Data Infile and after some experimenting have 
noticed this quirk.


BTW, all of the tables used below are empty and have identical table 
structures. The value being loaded into the primary key column is 'NULL'.


Test1:
246 seconds to run Load Data Infile into a table (Table1) with 1 primary 
autoinc column, and 2 compound keys.


Test2:
 69 seconds to  run Load Data Infile into similar table (Table2) with no keys
111 seconds to rebuild the missing keys in Table2

69+111=180 seconds for Table2 compared to 246 seconds for Table1.

Now I thought when using Load Data Infile on an empty table it would 
rebuild *all* of the keys AFTER the data has been loaded. This may not be 
the case. I suspect the extra time for
Test1 is caused by the Load Data building the primary key as the data is 
being loaded.


Can someone confirm this?
If so, then when loading data into an empty table, it is always going to be 
faster to remove the keys then load the data, then add the keys.


Mike





LOAD DATA INFILE Syntax error

2009-06-29 Thread Ralph Kutschera

Hallo List!

  I have a CSV file which I would like to import into MySQL.

The file header and an example:
| Page,Device,Group,ItemID,Item,Value
| Overview,General,Computer,513,OS,Linux


The table has:
| Create Table: CREATE TABLE `table` (
|   `ID` int(11) NOT NULL auto_increment,
|   `Page` varchar(128) default NULL,
|   `Device` varchar(128) default NULL,
|   `Group` varchar(128) default NULL,
|   `ItemID` varchar(128) default NULL,
|   `Item` varchar(128) default NULL,
|   `Value` varchar(128) default NULL,
|   PRIMARY KEY  (`ID`)
| ) ENGINE=MyISAM DEFAULT CHARSET=latin1


So I would like to import the first file field into the second table 
field, the second file field into the third table field, and so on; the ID column is just there to have an index.


I'm using:
| LOAD DATA INFILE 'test.csv' INTO TABLE table
|   FIELDS TERMINATED BY ','
|   LINES STARTING BY '' TERMINATED BY '\n'
|   (Page, Device, GROUP , ItemID, Item, Value);


which gives me:
| #1064 - You have an error in your SQL syntax; check the manual that
| corresponds to your MySQL server version for the right syntax to use
| near 'Group, ItemID, Item, Value)' at line 2

I cannot find the error. Please help me!
MySQL version = 5.0.32-Debian_7etch8-log

TIA,
  Ralph





Re: LOAD DATA INFILE Syntax error

2009-06-29 Thread Johnny Withers
Group is a reserved word in MySQL.

You need to put backticks around it in your statement:

| LOAD DATA INFILE 'test.csv' INTO TABLE table
|   FIELDS TERMINATED BY ','
|   LINES STARTING BY '' TERMINATED BY '\n'
|   (Page, Device, `GROUP` , ItemID, Item, Value);


On Mon, Jun 29, 2009 at 7:07 AM, Ralph Kutschera 
news2...@ecuapac.dyndns.org wrote:

 Hallo List!

  I have a CVS file which i would like to import to MySQL.

 The file header and an example:
 | Page,Device,Group,ItemID,Item,Value
 | Overview,General,Computer,513,OS,Linux


 The table has:
 | Create Table: CREATE TABLE `table` (
 |   `ID` int(11) NOT NULL auto_increment,
 |   `Page` varchar(128) default NULL,
 |   `Device` varchar(128) default NULL,
 |   `Group` varchar(128) default NULL,
 |   `ItemID` varchar(128) default NULL,
 |   `Item` varchar(128) default NULL,
 |   `Value` varchar(128) default NULL,
 |   PRIMARY KEY  (`ID`)
 | ) ENGINE=MyISAM DEFAULT CHARSET=latin1


 So I would like to import the first file field to the second table field,
 the second file field to the third table,... Just to have an index.

 I'm using:
 | LOAD DATA INFILE 'test.csv' INTO TABLE table
 |   FIELDS TERMINATED BY ','
 |   LINES STARTING BY '' TERMINATED BY '\n'
 |   (Page, Device, GROUP , ItemID, Item, Value);


 which gives me:
 | #1064 - You have an error in your SQL syntax; check the manual that
 | corresponds to your MySQL server version for the right syntax to use
 | near 'Group, ItemID, Item, Value)' at line 2

 I cannot find the error. Please help me!
 MySQL version = 5.0.32-Debian_7etch8-log

 TIA,
  Ralph






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: LOAD DATA INFILE Syntax error

2009-06-29 Thread Ralph Kutschera

Johnny Withers schrieb:

Group is a keyword in mysql:

You need to put backticks around it in your statement:

| LOAD DATA INFILE 'test.csv' INTO TABLE table
|   FIELDS TERMINATED BY ','
|   LINES STARTING BY '' TERMINATED BY '\n'
|   (Page, Device, `GROUP` , ItemID, Item, Value);


Ooookay. Thank you very much!





mysqlimport/load data infile is using a temp file - why?

2008-08-21 Thread jthorpe

Hi,

I've been trying to import a 10G dump file using mysqlimport
and it is eventually failing because it runs out of tmpdir
space -- I get Errcode: 28.

I was surprised that it was using a temp file at all. I've
looked in the documentation and other sources but have not been
able to find anything about this file.

This is on v5.0.51.  The table is innodb and has 75 columns with
65% of them tinyint, 20% float, and the rest char,datetime,int.
The primary key is composite on two columns (int,char).
The data file that is being imported is on the database server.

I'll try this again and monitor the status parameters to see what
it is doing, but I was wondering if anyone already knows?

Thanks,
John




Re: mysqlimport/load data infile is using a temp file - why?

2008-08-21 Thread Ananda Kumar
MySQL uses tmpdir
whenever there is any index creation.

regards
anandkl


On 8/21/08, jthorpe [EMAIL PROTECTED] wrote:

 Hi,

 I've been trying to import a 10G dump file using mysqlimport
 and it is eventually failing because it runs out of tmpdir
 space -- I get Errcode: 28.

 I was surprised that it was using a temp file at all. I've
 looked in the documentation and other sources but have not been
 able to find anything about this file.

 This is on v5.0.51.  The table is innodb and has 75 columns with
 65% of them tinyint, 20% float, and the rest char,datetime,int.
 The primary key is composite on two columns (int,char).
 The data file that is being imported is on the database server.

 I'll try this again and monitor the status parameters to see what
 it is doing, but I was wondering if anyone already knows?

 Thanks,
 John





Re: mysqlimport/load data infile is using a temp file - why?

2008-08-21 Thread Moon's Father
You should increase the parameters named bulk_insert_buffer_size and
max_allowed_packet.
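
A hedged sketch for inspecting and adjusting the settings involved; the 256M
values are arbitrary examples, and tmpdir itself is not dynamic (it has to be
changed in my.cnf and the server restarted):

SHOW VARIABLES LIKE 'tmpdir';
SET SESSION bulk_insert_buffer_size = 256 * 1024 * 1024;
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;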

On 8/21/08, Ananda Kumar [EMAIL PROTECTED] wrote:

 Mysql use tmpdir,
 when ever there is any index creation.

 regards
 anandkl


 On 8/21/08, jthorpe [EMAIL PROTECTED] wrote:
 
  Hi,
 
  I've been trying to import a 10G dump file using mysqlimport
  and it is eventually failing because it runs out of tmpdir
  space -- I get Errcode: 28.
 
  I was surprised that it was using a temp file at all. I've
  looked in the documentation and other sources but have not been
  able to find anything about this file.
 
  This is on v5.0.51.  The table is innodb and has 75 columns with
  65% of them tinyint, 20% float, and the rest char,datetime,int.
  The primary key is composite on two columns (int,char).
  The data file that is being imported is on the database server.
 
  I'll try this again and monitor the status parameters to see what
  it is doing, but I was wondering if anyone already knows?
 
  Thanks,
  John
 
 
 




-- 
I'm a MySQL DBA in china.
More about me just visit here:
http://yueliangdao0608.cublog.cn


Load data infile

2008-05-22 Thread Velen
Hi,

I would like to know if I can use LOAD DATA INFILE to update a table on the 
server from a workstation?

I tried it but was unsuccessful.  Is there any other way to do this from a 
workstation?

Thanks.

Regards,


Velen

RE: Load data infile

2008-05-22 Thread Rolando Edwards
LOAD DATA LOCAL INFILE

http://dev.mysql.com/doc/refman/5.0/en/load-data.html

http://www.mysql.com/news-and-events/newsletter/2002-05/a12.html




-Original Message-
From: Velen [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 22, 2008 2:24 PM
To: mysql@lists.mysql.com
Subject: Load data infile

Hi,

I would like to know if I can use the Load data infile to update a table on the 
server from a workstation?

I tried it but was unsuccessful.  Is there any other way to do this from a 
workstation?

Thanks.

Regards,


Velen




mysqlimport & load data infile

2008-03-18 Thread Hiep Nguyen
I read about mysqlimport & LOAD DATA INFILE for MySQL, but I can't find a 
way to import a text file using column lengths instead of a delimiter.


my text file contains fixed length column:

--


I can use MS Excel to convert all the files to .csv format and import them, but it 
would take a long time and I would have to escape the delimiter.


So, is there a way to import a text file with fixed column sizes into 
MySQL?


thanks
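
A hedged sketch of one common approach on servers that support the SET clause
of LOAD DATA: read each whole line into a user variable and slice it with
SUBSTR(). The table name, column names and widths below are made up, and a
literal tab in the data would still split the line under the default field
terminator:

LOAD DATA LOCAL INFILE '/path/to/fixed.txt'
  INTO TABLE mytable
  LINES TERMINATED BY '\n'
  (@line)
  SET col1 = TRIM(SUBSTR(@line, 1, 10)),
      col2 = TRIM(SUBSTR(@line, 11, 5)),
      col3 = TRIM(SUBSTR(@line, 16, 8));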




load data infile - fails to load my db2 del (ascii) file

2007-11-06 Thread lanes

hi all,


Right now I'm trying to migrate from DB2 running under Linux to MySQL v5.1.

I managed to export the DB2 structure & data into a del (ASCII) file,
but when I try to load the data from the del file into a MySQL table, it
generates an error.

Below is the LOAD DATA INFILE syntax I use:
LOAD DATA INFILE 'C:\\Migration\\del\\TABLE01.del' INTO TABLE TABLE01 FIELDS
TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';


And below is a sample of the del file (in this case
TABLE01.del):
1,0,"2007-07-31 00:25:12"
2,0,"2007-07-31 14:09:00"
3,0,"2007-07-31 00:00:00"
4,0,"2007-07-31 00:00:00"
5,0,"2007-07-31 00:00:00"


And below is the TABLE01 structure:
FieldName, Type, Null, Primary
--
MY_TABLEID, int(11), NO, PRIMARY KEY
CNTS, int(11), NO, NOT PRIMARY KEY
INSERT_DATE, datetime, NO, PRIMARY KEY


The error I encounter is:
ERROR 1292 (22007): Incorrect datetime value: '"2007-07-31 00:25:12"'.


I tried a trick: removing the quote signs from "2007-07-31
00:25:12", so the del file becomes:
1,0,2007-07-31 00:25:12
2,0,2007-07-31 14:09:00
3,0,2007-07-31 00:00:00
4,0,2007-07-31 00:00:00
5,0,2007-07-31 00:00:00

And I ran LOAD DATA INFILE again with the same command:
LOAD DATA INFILE 'C:\\Migration\\del\\TABLE01.del' INTO TABLE TABLE01 FIELDS
TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';
and it was successful.


To me this is very strange, since I can do a SQL insert using "2007-07-31
00:25:12", for example: insert into TABLE01 values (11, 22, "2007-07-31
00:25:12"), successfully without any error!
In fact, when I use the MySQL Administrator tool and export the data
from TABLE01 into CSV, I find that the structure is exactly the same
as the del file generated by DB2.

Did I make any mistake?

Or do I need to initialize something before I call
LOAD DATA INFILE 'C:\\Migration\\del\\TABLE01.del' INTO TABLE TABLE01 FIELDS
TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'
...?


Any help is appreciated.
Thank you.


Regards,
Lanes



Re: load data infile and character set

2007-10-29 Thread Ananda Kumar
Hi,
Try this.

set session collation_database=latin1_swedish_ci;
set session character_set_database=latin1;

regards
anandkl


On 10/29/07, Dušan Pavlica [EMAIL PROTECTED] wrote:

 Are you sure your file is coded in utf8? Character set of your file must
 be same as charset of your database.

 Dusan

 Caleb Racey wrote:
  Does anyone know how to get the load data infile command to load utf8
 data?
 
  I have setup a database as utf8 with a collation of utf8_general_ci,
 the
  mysqld server is started with --character-set-server=utf8. Server
 variables
  say character_set_database = utf8. I use the sql below
 
  LOAD DATA INFILE 'filename' REPLACE INTO TABLE db.table CHARACTER SET
 utf8
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
 
  Yet when i try to load a file with an accented value in it e.g.
 Agustín
  the value gets truncated to Agust
 
  anyone got any solutions to this?
 
  Regards
 
  Cal
 
 





Re: load data infile and character set

2007-10-29 Thread Dušan Pavlica
Are you sure your file is encoded in utf8? The character set of your file must 
be the same as the charset of your database.


Dusan

Caleb Racey wrote:

Does anyone know how to get the load data infile command to load utf8 data?

I have setup a database as utf8 with a collation of utf8_general_ci,   the
mysqld server is started with --character-set-server=utf8. Server variables
say character_set_database = utf8. I use the sql below

LOAD DATA INFILE 'filename' REPLACE INTO TABLE db.table CHARACTER SET utf8
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'

Yet when i try to load a file with an accented value in it e.g. Agustín
the value gets truncated to Agust

anyone got any solutions to this?

Regards

Cal

  





Re: load data infile and character set

2007-10-29 Thread mysql

Ananda Kumar wrote:

Hi,
Try this.

set session collation_database=latin1_swedish_ci;
set session character_set_database=latin1;



Rather:

set session collation_database=utf8_general_ci;
set session character_set_database=utf8;

Also, make sure you have these in my.cnf:

[client]
default-character-set=utf8

[mysqld]
default-character-set=utf8
init-connect='SET NAMES utf8'

And you might want to add these also:

collation_server=utf8_unicode_ci
character_set_server=utf8




load data infile and character set

2007-10-26 Thread Caleb Racey
Does anyone know how to get the load data infile command to load utf8 data?

I have setup a database as utf8 with a collation of utf8_general_ci,   the
mysqld server is started with --character-set-server=utf8. Server variables
say character_set_database = utf8. I use the sql below

LOAD DATA INFILE 'filename' REPLACE INTO TABLE db.table CHARACTER SET utf8
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'

Yet when I try to load a file with an accented value in it, e.g. "Agustín",
the value gets truncated to "Agust".

Has anyone got any solutions to this?

Regards

Cal


Re: load data infile and character set

2007-10-26 Thread Baron Schwartz

Caleb Racey wrote:

Does anyone know how to get the load data infile command to load utf8 data?

I have setup a database as utf8 with a collation of utf8_general_ci,   the
mysqld server is started with --character-set-server=utf8. Server variables
say character_set_database = utf8. I use the sql below

LOAD DATA INFILE 'filename' REPLACE INTO TABLE db.table CHARACTER SET utf8
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'

Yet when i try to load a file with an accented value in it e.g. Agustín
the value gets truncated to Agust

anyone got any solutions to this?


It is indeed buggy and badly documented.  It depends on the current 
database's character set instead.  Try this:


SET NAMES utf8;
SET character_set_database=utf8;
LOAD DATA INFILE...

Baron




Re: load data infile and character set

2007-10-26 Thread mysql

Caleb Racey wrote:

Does anyone know how to get the load data infile command to load utf8 data?

I have setup a database as utf8 with a collation of utf8_general_ci,   the
mysqld server is started with --character-set-server=utf8. Server variables
say character_set_database = utf8. I use the sql below

LOAD DATA INFILE 'filename' REPLACE INTO TABLE db.table CHARACTER SET utf8
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'

Yet when i try to load a file with an accented value in it e.g. Agustín
the value gets truncated to Agust


If you haven't already, issue this first:

SET CHARACTER SET utf8;
SET NAMES utf8;
LOAD DATA INFILE ...

b




Re: load data infile and character set

2007-10-26 Thread Baron Schwartz

Caleb Racey wrote:

On 10/26/07, Baron Schwartz [EMAIL PROTECTED] wrote:

Caleb Racey wrote:

It is indeed buggy and badly documented.  It depends on the current
database's character set instead.  Try this:

SET NAMES utf8;
SET character_set_database=utf8;
LOAD DATA INFILE...

Baron


Thanks for the suggestion.

I'm afraid I get the same behaviour when I try this approach: accented values
still truncate at the accent when I use load data infile.


OK, the next suggestion is to use the 'binary' character set.  By the
way, I accidentally omitted quotes above; I should have typed

SET character_set_database='utf8';

You should verify your connection's character sets with

SHOW VARIABLES LIKE '%character%';

Try it with 'binary' and see if that works.

Baron


thanks again
I'm afraid 'binary' achieved the  same result  as did using utf8 with
quotes.


I'm out of ideas, then.  I wish I could help more.

Baron




data position changes when 'Load Data infile'......

2007-09-22 Thread ars k
Hi Friends,
Today I was testing the 'Load data infile ...' command (
http://dev.mysql.com/doc/refman/5.0/en/loading-tables.html ) on my system.
I was surprised when I ran a select statement on that table. The
scenario is as follows:

In a text file which is to be loaded, I am having data as follows:
3   v,4  a

mysql> desc mytable;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) |      | PRI | NULL    | auto_increment |
| foo   | char(1) | YES  |     | NULL    |                |
+-------+---------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
+----+------+
2 rows in set (0.01 sec)

mysql> load data infile '/home/mysql/egdata' into table mytable lines
terminated by ',';
Query OK, 2 rows affected, 0 warnings (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | v    |
|  4 | a    |
+----+------+
4 rows in set (0.00 sec)

mysql> delete from mytable where id in (3,4);
Query OK, 2 rows affected (0.00 sec)

mysql> load data infile '/home/mysql/egdata' into table mytable lines
terminated by ',';
Query OK, 2 rows affected, 0 warnings (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
|  4 | a    |
|  3 | v    |
+----+------+
4 rows in set (0.00 sec)

The select query returns rows '3' and '4' in different orders, even though they were
loaded twice from the same txt file. Why is this happening? Is there any reason
or algorithm involved in this?
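
A hedged note, not from the thread: rows in a SQL table have no guaranteed
storage or retrieval order, and after a DELETE the engine is free to reuse the
freed space, so SELECT * may return reloaded rows in a different order. If the
order matters, ask for it explicitly:

SELECT * FROM mytable ORDER BY id;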


Re: LOAD DATA INFILE

2007-06-28 Thread mos

At 01:08 PM 6/27/2007, you wrote:

Hi,



I have a question on LOAD DATA INFILE command.

I have a table with 10 columns. Is it possible to update only few
columns of this table using LOAD DATA INFILE?


No. As you found out, Load Data loads the specified columns and sets the 
other columns to NULL. The only way around this is to write the current 
column values that you want to keep back out to the text file. Or load the 
data into a second temporary table with just the columns you need, and then 
do an Update to the original table (assuming, of course, all of the data in 
the text file are updates and not new rows; otherwise you'll then have to 
join the original table with the temp table to find the new rows, then add 
those to the original table). Whew!


Mike
(If programming was easy, everyone would be doing it.) 
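
A hedged sketch of the temp-table approach described above; every table and
column name here is made up:

CREATE TEMPORARY TABLE stage (id INT PRIMARY KEY, col3 VARCHAR(50), col7 INT);

LOAD DATA LOCAL INFILE '/tmp/updates.txt'
  INTO TABLE stage
  FIELDS TERMINATED BY '\t'
  (id, col3, col7);

UPDATE original AS o
  JOIN stage AS s ON s.id = o.id
  SET o.col3 = s.col3,
      o.col7 = s.col7;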





Re: LOAD DATA INFILE

2007-06-28 Thread Ananda Kumar

Or:
load the data into a temp table, create the necessary indexes on the required
columns, and write a stored proc to update the columns in the original table with
values from the temp table, based on key-column joins between both tables.

regards
anandkl


On 6/28/07, mos [EMAIL PROTECTED] wrote:


At 01:08 PM 6/27/2007, you wrote:
Hi,



I have a question on LOAD DATA INFILE command.

I have a table with 10 columns. Is it possible to update only few
columns of this table using LOAD DATA INFILE?

No. As you found out  Load Data loads the specified columns and sets the
other columns to NULL. The only way around this is to write the current
column values that you want to save back out to the text file. Or load the
data into a second temporary table with just the columns you need, and
then
do an Update to the original table (assuming of course all of the data in
the text file are updates and not new rows otherwise you'll then have to
join the original table with the temp table to find the new rows then add
those to the original table.) Whew!

Mike
(If programming was easy, everyone would be doing it.)





Re: need help...LOAD DATA INFILE

2007-05-21 Thread Ananda Kumar

Hi All,
I was able to load data properly.
We need to set these parameters:

set session collation_database=latin1_swedish_ci;
set session character_set_database=latin1;

please check this url for more info

http://lists.mysql.com/commits/16915


On 5/19/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I am loading data into mysql database using LOAD DATA INFILE. After the
load i am getting this error.

Warning | 1366 | Incorrect string value: '\xE9cor' for column
'CATEGORY_NAME' at row 2137 .

My database character set is as below

character_set_database   | utf8
 character_set_client | latin1 |
| character_set_connection | latin1
 character_set_server | utf8   |
| character_set_system | utf8

and collation information is

 collation_connection | latin1_swedish_ci |
| collation_database   | utf8_general_ci   |
| collation_server | utf8_general_ci

I have spooled this data from oracle and trying to insert this data into
mysql. Can you please set me know how should i fix this error.

The actual value in oracle database is Other Home Décor, but in the
spool file its coming as some Other Home D\351cor. Can you please let me
know how i can fix this error.

regards
anandkl





need help...LOAD DATA INFILE

2007-05-19 Thread Ananda Kumar

Hi All,
I am loading data into mysql database using LOAD DATA INFILE. After the load
i am getting this error.

Warning | 1366 | Incorrect string value: '\xE9cor' for column
'CATEGORY_NAME' at row 2137 .

My database character set is as below

| character_set_client     | latin1 |
| character_set_connection | latin1 |
| character_set_database   | utf8   |
| character_set_server     | utf8   |
| character_set_system     | utf8   |

and collation information is

| collation_connection | latin1_swedish_ci |
| collation_database   | utf8_general_ci   |
| collation_server     | utf8_general_ci   |

I have spooled this data from Oracle and am trying to insert it into
MySQL. Can you please let me know how I should fix this error?

The actual value in the Oracle database is "Other Home Décor", but in the spool
file it comes out as something like "Other Home D\351cor". Can you please let me know
how I can fix this error.

regards
anandkl


Re: need help...LOAD DATA INFILE

2007-05-19 Thread Ananda Kumar

Also friends,
When I see this data in the vi editor it looks like this: "Other Home DÃ©cor"

Just wanted to give information. Please help me.

regards
anandkl


On 5/19/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I am loading data into mysql database using LOAD DATA INFILE. After the
load i am getting this error.

Warning | 1366 | Incorrect string value: '\xE9cor' for column
'CATEGORY_NAME' at row 2137 .

My database character set is as below

character_set_database   | utf8
 character_set_client | latin1 |
| character_set_connection | latin1
 character_set_server | utf8   |
| character_set_system | utf8

and collation information is

 collation_connection | latin1_swedish_ci |
| collation_database   | utf8_general_ci   |
| collation_server | utf8_general_ci

I have spooled this data from oracle and trying to insert this data into
mysql. Can you please set me know how should i fix this error.

The actual value in oracle database is Other Home Décor, but in the
spool file its coming as some Other Home D\351cor. Can you please let me
know how i can fix this error.

regards
anandkl





Replication & LOAD DATA INFILE

2007-02-02 Thread Jesse
I've just performed a LOAD DATA INFILE on the master server, and I've waited 
a while now, and the data has not shown up in the SLAVE.  Does Replication 
not handle LOAD DATA INFILE?


Jesse 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Replication LOAD DATA INFILE

2007-02-02 Thread Brown, Charles
During Replication the SLAVE reads data from its MASTER's log. Chances
are LOAD DATA gets no logging -- I won't be surprised.
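
(For what it's worth, LOAD DATA INFILE is normally written to the master's
binary log as a special event and should replicate. A quick way to see what
the slave is doing, sketched here assuming a standard master/slave setup:)

SHOW SLAVE STATUS\G
-- Check Slave_IO_Running, Slave_SQL_Running, Seconds_Behind_Master and
-- Last_Error to see whether the statement arrived and whether it failed.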


-Original Message-
From: Jesse [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 02, 2007 9:44 AM
To: MySQL List
Subject: Replication  LOAD DATA INFILE

I've just performed a LOAD DATA INFILE on the master server, and I've
waited 
a while now, and the data has not shown up in the SLAVE.  Does
Replication 
not handle LOAD DATA INFILE?

Jesse 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: change format of date fields during LOAD DATA INFILE?

2006-10-25 Thread Paul DuBois

At 21:39 -0400 10/14/06, Ferindo Middleton wrote:

Is there a way to change the format of date fields MySQL is expecting when
LOADing data from a file? I have no problem with the format MySQL saves the
date but most spreadsheet programs I use don't make it easy to export text
files with date fields in the format YYYY-MM-DD even if I formatted the field
that way on-screen.

It would be great if you could tell MySQL on the command line to expect
dates in the format Month/Day/Year or something like that and be able to
interpret that and convert the date to the format it's expecting on the fly.


If you're using MySQL 5.0 or higher, you can read a column into a user
variable and use SET to reformat the column value before inserting it
into the table.  Example:

LOAD DATA LOCAL INFILE 'newdata.txt'
INTO TABLE t (name,@date,value)
SET date = STR_TO_DATE(@date,'%m/%d/%y');

The format string depends on the format of your input data, of course.

http://dev.mysql.com/doc/refman/5.0/en/load-data.html
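
(If the spreadsheet writes four-digit years, the same idea applies with %Y
instead of %y; a trivial variant of the example above:)

LOAD DATA LOCAL INFILE 'newdata.txt'
INTO TABLE t (name,@date,value)
SET date = STR_TO_DATE(@date,'%m/%d/%Y');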

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: change format of date fields during LOAD DATA INFILE?

2006-10-16 Thread Jerry Schwartz
I just tested it with Excel, as it will save the date as seen if you save
the worksheet to a text file. I do this quite a bit, actually, to put
spreadsheet data into MySQL. Often I use Excel macros to construct entire
UPDATE or INSERT statements, and save those into a text file for MySQL to
inhale.

I can't speak for OpenOffice.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: Ferindo Middleton [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 14, 2006 9:40 PM
 To: mysql
 Subject: change format of date fields during LOAD DATA INFILE?

 Is there a way to change the format of date fields MySQL is
 expecting when
 LOADing data from a file? I have no problem with the format
 MySQL saves the
 date but most spreadsheet programs I use don't make it easy
 to export text
 files with date fields in the format -MM-DD even if I
 formated the field
 that way on-screen.

 It would be great if you could tell MySQL on the command line
 to expect
 dates in the format Month/Day/Year or something like that and
 be able to
 interpret that and convert the date to the format it's
 expecting on the fly.

 --
 Ferindo





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: change format of date fields during LOAD DATA INFILE?

2006-10-16 Thread Ferindo Middleton

I was using OpenOffice... and I couldn't get it to keep the YYYY-MM-DD format
I saw on screen when I went to save it as a text file. I suppose this should
be reported to their developers as an enhancement.

There's no way to get MySQL to accept dates in a different format when
performing the operation on the command line though?

Ferindo

On 10/16/06, Jerry Schwartz [EMAIL PROTECTED] wrote:


I just tested it with Excel, as it will save the date as seen if you save
the worksheet to a text file. I do this quite a bit, actually, to put
spreadsheet data into MySQL. Often I use Excel macros to construct entire
UPDATE or INSERT statements, and save those into a text file for MySQL to
inhale.

I can't speak for OpenOffice.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: Ferindo Middleton [mailto:[EMAIL PROTECTED] ]
 Sent: Saturday, October 14, 2006 9:40 PM
 To: mysql
 Subject: change format of date fields during LOAD DATA INFILE?

 Is there a way to change the format of date fields MySQL is
 expecting when
 LOADing data from a file? I have no problem with the format
 MySQL saves the
 date but most spreadsheet programs I use don't make it easy
 to export text
 files with date fields in the format -MM-DD even if I
 formated the field
 that way on-screen.

 It would be great if you could tell MySQL on the command line
 to expect
 dates in the format Month/Day/Year or something like that and
 be able to
 interpret that and convert the date to the format it's
 expecting on the fly.

 --
 Ferindo







Re: change format of date fields during LOAD DATA INFILE?

2006-10-15 Thread mos

At 08:39 PM 10/14/2006, Ferindo Middleton wrote:

Is there a way to change the format of date fields MySQL is expecting when
LOADing data from a file? I have no problem with the format MySQL saves the
date but most spreadsheet programs I use don't make it easy to export text
files with date fields in the format -MM-DD even if I formated the field
that way on-screen.

It would be great if you could tell MySQL on the command line to expect
dates in the format Month/Day/Year or something like that and be able to
interpret that and convert the date to the format it's expecting on the fly.

--
Ferindo


Ferindo,
If you don't want to change the input file to the proper date 
format, then you'll need to read the data into a temporary table and 
manipulate the string date into a MySQL date 'YYYY-MM-DD'. I believe MaxDB 
has the ability to change the date format before loading data. There used 
to be a page where you could submit suggestions, but I was only able to come 
up with this one: http://www.mysql.com/company/contact/. I think MySQL AB 
deliberately hides the suggestions page. <g> There is also a comment by 
Remco Wendt at http://dev.mysql.com/doc/refman/5.0/en/load-data.html which 
shows you how to load European dates; that may be of help to you.
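
(A rough sketch of the temporary-table approach described above. The table,
file and column names are invented, the input dates are assumed to be European
DD/MM/YYYY strings, and STR_TO_DATE needs MySQL 4.1.1 or later:)

CREATE TEMPORARY TABLE t_stage (name CHAR(40), d CHAR(10), value INT);
LOAD DATA LOCAL INFILE 'export.txt' INTO TABLE t_stage
  FIELDS TERMINATED BY '\t';
INSERT INTO t (name, date, value)
  SELECT name, STR_TO_DATE(d, '%d/%m/%Y'), value FROM t_stage;
DROP TEMPORARY TABLE t_stage;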


Mike 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



change format of date fields during LOAD DATA INFILE?

2006-10-14 Thread Ferindo Middleton

Is there a way to change the format of date fields MySQL is expecting when
LOADing data from a file? I have no problem with the format MySQL saves the
date but most spreadsheet programs I use don't make it easy to export text
files with date fields in the format YYYY-MM-DD even if I formatted the field
that way on-screen.

It would be great if you could tell MySQL on the command line to expect
dates in the format Month/Day/Year or something like that and be able to
interpret that and convert the date to the format it's expecting on the fly.

--
Ferindo


sorting datafile for load data infile

2006-08-18 Thread George Law
Hello All,

An ex-coworker of mine recently sent over a link to a MySQL article about
speeding up mass imports.
Unfortunately I deleted the email before I had time to actually digest
it, and now I am wondering if it could help with my imports.

The article suggested sorting the data ahead of time to prevent mysql
from having to jump around as much.
What I have is a raw data file that I pre-process to create 3 separate
tab delimited files, which are then loaded into mysql using load data
infile...


I am working with a table with a primary key consisting of 2 fields.
| cdr_seq_no | int(9) unsigned | NO | PRI | 0 |   |
| callid     | char(33)        | NO | PRI |   |   |


show index from comp_cdr;

+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table    | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| comp_cdr |          0 | PRIMARY  |            1 | callid      | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
| comp_cdr |          0 | PRIMARY  |            2 | cdr_seq_no  | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
...

some sample data is :
| callid   | cdr_seq_no |
+--++
| 002644e5f21f4abaa204120fad41 |   89630624 |
| 004644e5f3db5af88248020fad41 |   89627398 |
| 008344e5ef975e6eafe0020fad41 |   89630801 |
| 009a44e5f2694aabb6cc12115a45 |   89614941 |
| 00e044e5f2e94dd45074020fad41 |   89619029 |
| 0007bb54c8691110800100201c0060ff |   89616484 |
| 002a401ae58711d382f98183346766e7 |   89621314 |
| 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
| 008724bcc7691110800100201c0144ff |   89628873 |
| 008bbf9dc9691110800100201c00edff |   89627287 |

Any suggestions on exactly how to sort those fields before 


Thanks!!!

--
George Law
[EMAIL PROTECTED]
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sorting datafile for load data infile

2006-08-18 Thread Dan Buettner

George, I've not heard of that technique - sorting so that mysql
doesn't have to jump around as much.

What I am aware of, and what can make a difference, is disabling
indexes prior to starting the load.  Your load goes quite a bit
faster, in general, but then there is some time at the spent updating
indexes.   Overall it can be faster.  From
http://dev.mysql.com/doc/refman/5.0/en/load-data.html :


If you use LOAD DATA INFILE on an empty MyISAM table, all non-unique
indexes are created in a separate batch (as for REPAIR TABLE).
Normally, this makes LOAD DATA INFILE much faster when you have many
indexes. In some extreme cases, you can create the indexes even faster
by turning them off with ALTER TABLE ... DISABLE KEYS before loading
the file into the table and using ALTER TABLE ... ENABLE KEYS to
re-create the indexes after loading the file. See Section 7.2.16,
Speed of INSERT Statements.
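
(Applied to the table in this thread, the sequence would look roughly like
this; the file name and delimiter are assumptions:)

ALTER TABLE comp_cdr DISABLE KEYS;
LOAD DATA INFILE '/tmp/comp_cdr.tsv' INTO TABLE comp_cdr
  FIELDS TERMINATED BY '\t';
ALTER TABLE comp_cdr ENABLE KEYS;
-- DISABLE KEYS only affects non-unique indexes; the primary key on
-- (callid, cdr_seq_no) is still maintained row by row during the load.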


Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:

Hello All,

An ex-coworker of my recently sent over a link to a mysql article about
speeding up mass imports.
unfortunately I deleted the email before I had time to actually digest
it and now I am wondering if it could help with my imports.

The article suggested sorting the data ahead of time to prevent mysql
from having to jump around as much.
What I have is a raw data file that I pre-process to create 3 separate
tab delimited files, which are then loaded into mysql using load data
infile...


I am working with a table with a primary key consisting of 2 fields.
| cdr_seq_no  | int(9) unsigned  | NO   | PRI |
0   |   |
| callid  | char(33) | NO   | PRI |
|   |


show index from comp_cdr;

+--++--+--+-
-+---+-+--++--++
-+
| Table| Non_unique | Key_name | Seq_in_index | Column_name
| Collation | Cardinality | Sub_part | Packed | Null | Index_type |
Comment |
+--++--+--+-
-+---+-+--++--++
-+
| comp_cdr |  0 | PRIMARY  |1 | callid
| A | 3454606 | NULL | NULL   |  | BTREE  |
|
| comp_cdr |  0 | PRIMARY  |2 | cdr_seq_no
| A | 3454606 | NULL | NULL   |  | BTREE  |
|
...

some sample data is :
| callid   | cdr_seq_no |
+--++
| 002644e5f21f4abaa204120fad41 |   89630624 |
| 004644e5f3db5af88248020fad41 |   89627398 |
| 008344e5ef975e6eafe0020fad41 |   89630801 |
| 009a44e5f2694aabb6cc12115a45 |   89614941 |
| 00e044e5f2e94dd45074020fad41 |   89619029 |
| 0007bb54c8691110800100201c0060ff |   89616484 |
| 002a401ae58711d382f98183346766e7 |   89621314 |
| 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
| 008724bcc7691110800100201c0144ff |   89628873 |
| 008bbf9dc9691110800100201c00edff |   89627287 |

Any suggestions on exactly how to sort those fields before


Thanks!!!

--
George Law
[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: sorting datafile for load data infile

2006-08-18 Thread George Law
his exact email:

As you know, when you are doing mass inserts of millions of
rows of data it can take hours.  Well, these guys said that if you sort
the rows of data 1st by the information that will be inserted in to the
database primary key before you do the inserts then the total insert
time will take a fraction of the time.  The reason being that then the
database doesn't have to jump back and forth in the TREE structure to
insert each row of data.  One row will be inserted immediately after the
previous row that was inserted and so it takes a lot less database
processing time/overhead.



Makes sense!  I thought you might be interested in this theory.
They claim it makes a world of difference!


Now I know he references doing straight inserts, not using load data,
so I am working on rewriting my code to do inserts.


I am working on loading everything into hashes in perl, keyed off the
callid field.  
then, realistically, I should be able to sort the hash on the index, and
process the inserts with the sorted data.
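
(An alternative sketch that keeps the sorting inside MySQL rather than in
Perl. The staging table name and file path are invented, and whether this
actually beats a plain LOAD DATA is worth benchmarking, as Dan suggests:)

CREATE TABLE comp_cdr_stage LIKE comp_cdr;
LOAD DATA INFILE '/tmp/comp_cdr.tsv' INTO TABLE comp_cdr_stage
  FIELDS TERMINATED BY '\t';
INSERT INTO comp_cdr
  SELECT * FROM comp_cdr_stage
  ORDER BY callid, cdr_seq_no;   -- feed rows to the target in primary-key order
DROP TABLE comp_cdr_stage;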



-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Friday, August 18, 2006 1:40 PM
To: George Law
Cc: MYSQL General List
Subject: Re: sorting datafile for load data infile

George, I've not heard of that technique - sorting so that mysql
doesn't have to jump around as much.

What I am aware of, and what can make a difference, is disabling
indexes prior to starting the load.  Your load goes quite a bit
faster, in general, but then there is some time at the spent updating
indexes.   Overall it can be faster.  From
http://dev.mysql.com/doc/refman/5.0/en/load-data.html :


If you use LOAD DATA INFILE on an empty MyISAM table, all non-unique
indexes are created in a separate batch (as for REPAIR TABLE).
Normally, this makes LOAD DATA INFILE much faster when you have many
indexes. In some extreme cases, you can create the indexes
even faster
by turning them off with ALTER TABLE ... DISABLE KEYS before loading
the file into the table and using ALTER TABLE ... ENABLE KEYS to
re-create the indexes after loading the file. See Section 7.2.16,
Speed of INSERT Statements.


Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:
 Hello All,

 An ex-coworker of my recently sent over a link to a mysql
article about
 speeding up mass imports.
 unfortunately I deleted the email before I had time to
actually digest
 it and now I am wondering if it could help with my imports.

 The article suggested sorting the data ahead of time to
prevent mysql
 from having to jump around as much.
 What I have is a raw data file that I pre-process to
create 3 separate
 tab delimited files, which are then loaded into mysql
using load data
 infile...


 I am working with a table with a primary key consisting of
2 fields.
 | cdr_seq_no  | int(9) unsigned  |
NO   | PRI |
 0   |   |
 | callid  | char(33) |
NO   | PRI |
 |   |


 show index from comp_cdr;


+--++--+--+--
---

-+---+-+--++--+--
--+
 -+
 | Table| Non_unique | Key_name | Seq_in_index
| Column_name
 | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
 Comment |

+--++--+--+--
---

-+---+-+--++--+--
--+
 -+
 | comp_cdr |  0 | PRIMARY  |1 | callid
 | A | 3454606 | NULL | NULL   |  | BTREE  |
 |
 | comp_cdr |  0 | PRIMARY  |2
| cdr_seq_no
 | A | 3454606 | NULL | NULL   |  | BTREE  |
 |
 ...

 some sample data is :
 | callid   | cdr_seq_no |
 +--++
 | 002644e5f21f4abaa204120fad41 |   89630624 |
 | 004644e5f3db5af88248020fad41 |   89627398 |
 | 008344e5ef975e6eafe0020fad41 |   89630801 |
 | 009a44e5f2694aabb6cc12115a45 |   89614941 |
 | 00e044e5f2e94dd45074020fad41 |   89619029 |
 | 0007bb54c8691110800100201c0060ff |   89616484 |
 | 002a401ae58711d382f98183346766e7 |   89621314 |
 | 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
 | 008724bcc7691110800100201c0144ff |   89628873 |
 | 008bbf9dc9691110800100201c00edff |   89627287 |

 Any suggestions on exactly how to sort those fields before


 Thanks!!!

 --
 George Law
 [EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
http://lists.mysql.com/[EMAIL PROTECTED]


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sorting datafile for load data infile

2006-08-18 Thread Dan Buettner

George, for raw speed into a MyISAM table, I think you'll find it hard
to beat LOAD DATA INFILE, especially if you disable keys before and
re-enable afterwards (which is not unlike what your friend proposes -
creating the index in a more efficient fashion).  I'd be interested to
hear how you get on with perl vs. LOAD DATA INFILE, if you do any
comparative benchmarks.

Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:

his exact email:

As you know, when you are doing mass inserts of millions of
rows of data it can take hours.  Well, these guys said that if you sort
the rows of data 1st by the information that will be inserted in to the
database primary key before you do the inserts then the total insert
time will take a fraction of the time.  The reason being that then the
database doesn't have to jump back and forth in the TREE structure to
insert each row of data.  One row will be inserted immediately after the
previous row that was inserted and so it takes a lot less database
processing time/overhead.



Makes sense!  I thought you might be interested in this theory.
They claim it makes a world of difference!


Now I know he references doing straight inserts, not using load data,
so I am working on rewriting my code to do inserts.


I am working on loading everything into hashes in perl, keyed off the
callid field.
then, realistically, I should be able to sort the hash on the index, and
process the inserts with the sorted data.



-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Friday, August 18, 2006 1:40 PM
To: George Law
Cc: MYSQL General List
Subject: Re: sorting datafile for load data infile

George, I've not heard of that technique - sorting so that mysql
doesn't have to jump around as much.

What I am aware of, and what can make a difference, is disabling
indexes prior to starting the load.  Your load goes quite a bit
faster, in general, but then there is some time at the spent updating
indexes.   Overall it can be faster.  From
http://dev.mysql.com/doc/refman/5.0/en/load-data.html :


If you use LOAD DATA INFILE on an empty MyISAM table, all non-unique
indexes are created in a separate batch (as for REPAIR TABLE).
Normally, this makes LOAD DATA INFILE much faster when you have many
indexes. In some extreme cases, you can create the indexes
even faster
by turning them off with ALTER TABLE ... DISABLE KEYS before loading
the file into the table and using ALTER TABLE ... ENABLE KEYS to
re-create the indexes after loading the file. See Section 7.2.16,
Speed of INSERT Statements.


Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:
 Hello All,

 An ex-coworker of my recently sent over a link to a mysql
article about
 speeding up mass imports.
 unfortunately I deleted the email before I had time to
actually digest
 it and now I am wondering if it could help with my imports.

 The article suggested sorting the data ahead of time to
prevent mysql
 from having to jump around as much.
 What I have is a raw data file that I pre-process to
create 3 separate
 tab delimited files, which are then loaded into mysql
using load data
 infile...


 I am working with a table with a primary key consisting of
2 fields.
 | cdr_seq_no  | int(9) unsigned  |
NO   | PRI |
 0   |   |
 | callid  | char(33) |
NO   | PRI |
 |   |


 show index from comp_cdr;


+--++--+--+--
---

-+---+-+--++--+--
--+
 -+
 | Table| Non_unique | Key_name | Seq_in_index
| Column_name
 | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
 Comment |

+--++--+--+--
---

-+---+-+--++--+--
--+
 -+
 | comp_cdr |  0 | PRIMARY  |1 | callid
 | A | 3454606 | NULL | NULL   |  | BTREE  |
 |
 | comp_cdr |  0 | PRIMARY  |2
| cdr_seq_no
 | A | 3454606 | NULL | NULL   |  | BTREE  |
 |
 ...

 some sample data is :
 | callid   | cdr_seq_no |
 +--++
 | 002644e5f21f4abaa204120fad41 |   89630624 |
 | 004644e5f3db5af88248020fad41 |   89627398 |
 | 008344e5ef975e6eafe0020fad41 |   89630801 |
 | 009a44e5f2694aabb6cc12115a45 |   89614941 |
 | 00e044e5f2e94dd45074020fad41 |   89619029 |
 | 0007bb54c8691110800100201c0060ff |   89616484 |
 | 002a401ae58711d382f98183346766e7 |   89621314 |
 | 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
 | 008724bcc7691110800100201c0144ff |   89628873 |
 | 008bbf9dc9691110800100201c00edff |   89627287 |

 Any suggestions on exactly how to sort those fields before


 Thanks!!!

 --
 George Law
 [EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http

RE: sorting datafile for load data infile

2006-08-18 Thread George Law
Dan,

Right now I am using load data to load into my database.

I am inserting 10,000 - 20,000 rows (X3) every 10 minutes. 

One table currently has 17866472 rows

Just using the date command from bash, the rough estimates for the
inserts via load data... are :

2006-08-18 15:39:23 : begin import into table1
2006-08-18 15:41:33 : import into table1 records: (18183) deleted:(0)
skipped: (0) warnings:(0)

130 seconds for 18182 rows... as the number of rows in the table
increases, this import time increases.
Every once in a while I rotate this table out, and when doing the same
type of 'load data' on the freshly rotated table, it takes < 1 second.

I have tried disabling the keys, but if I remember correctly, it took
just as long if not longer.  
This table has a primary key based on the 2 fields I described earlier,
plus indexes on 4 other fields.  
All alphanumeric fields are fixed width char fields.  

So, I am eager to see if this sorting idea helps any.

--
George

 

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 18, 2006 3:53 PM
To: George Law
Cc: MYSQL General List
Subject: Re: sorting datafile for load data infile

George, for raw speed into a MyISAM table, I think you'll 
find it hard
to beat LOAD DATA INFILE, especially if you disable keys before and
re-enable afterwards (which is not unlike what your friend proposes -
creating the index in a more efficient fashion).  I'd be 
interested to
hear how you get on with perl vs. LOAD DATA INFILE, if you do any
comparative benchmarks.

Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:
 his exact email:

 As you know, when you are doing mass inserts of 
millions of
 rows of data it can take hours.  Well, these guys said 
that if you sort
 the rows of data 1st by the information that will be 
inserted in to the
 database primary key before you do the inserts then the 
total insert
 time will take a fraction of the time.  The reason being 
that then the
 database doesn't have to jump back and forth in the TREE 
structure to
 insert each row of data.  One row will be inserted 
immediately after the
 previous row that was inserted and so it takes a lot less database
 processing time/overhead.



 Makes sense!  I thought you might be interested in 
this theory.
 They claim it makes a world of difference!


 Now I know he references doing straight inserts, not using 
load data,
 so I am working on rewriting my code to do inserts.


 I am working on loading everything into hashes in perl, 
keyed off the
 callid field.
 then, realistically, I should be able to sort the hash on 
the index, and
 process the inserts with the sorted data.



 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 18, 2006 1:40 PM
 To: George Law
 Cc: MYSQL General List
 Subject: Re: sorting datafile for load data infile
 
 George, I've not heard of that technique - sorting so 
that mysql
 doesn't have to jump around as much.
 
 What I am aware of, and what can make a difference, is disabling
 indexes prior to starting the load.  Your load goes quite a bit
 faster, in general, but then there is some time at the 
spent updating
 indexes.   Overall it can be faster.  From
 http://dev.mysql.com/doc/refman/5.0/en/load-data.html :
 
 
 If you use LOAD DATA INFILE on an empty MyISAM table, 
all non-unique
 indexes are created in a separate batch (as for REPAIR TABLE).
 Normally, this makes LOAD DATA INFILE much faster when 
you have many
 indexes. In some extreme cases, you can create the indexes
 even faster
 by turning them off with ALTER TABLE ... DISABLE KEYS 
before loading
 the file into the table and using ALTER TABLE ... ENABLE KEYS to
 re-create the indexes after loading the file. See 
Section 7.2.16,
 Speed of INSERT Statements.
 
 
 Dan
 
 
 On 8/18/06, George Law [EMAIL PROTECTED] wrote:
  Hello All,
 
  An ex-coworker of my recently sent over a link to a mysql
 article about
  speeding up mass imports.
  unfortunately I deleted the email before I had time to
 actually digest
  it and now I am wondering if it could help with my imports.
 
  The article suggested sorting the data ahead of time to
 prevent mysql
  from having to jump around as much.
  What I have is a raw data file that I pre-process to
 create 3 separate
  tab delimited files, which are then loaded into mysql
 using load data
  infile...
 
 
  I am working with a table with a primary key consisting of
 2 fields.
  | cdr_seq_no  | int(9) unsigned  |
 NO   | PRI |
  0   |   |
  | callid  | char(33) |
 NO   | PRI |
  |   |
 
 
  show index from comp_cdr;
 
 
 +--++--+--+--
 ---
 
 -+---+-+--++--+--
 --+
  -+
  | Table| Non_unique | Key_name | Seq_in_index
 | Column_name
  | Collation | Cardinality | Sub_part | Packed | Null 
| Index_type

Re: Load Data Infile and newlines

2006-08-08 Thread Gerald L. Clark

Mark Nienberg wrote:
I'm trying to use LOAD DATA INFILE to read a csv formatted file into a 
mysql 5.0.22 table.  Some of the fields contain text that has newline 
characters in it.  After reading the manual to learn how special 
characters are treated, I altered the csv file so newlines are 
represented by '\\n', that is two backslashes followed by a lowercase n.


After loading the file, I find that the text fields do indeed contain 
'\n' as I hoped (one of the backslashes was stripped during import).  
But for some reason these newlines aren't treated as such by mysql or 
php.  For example, the php function nl2br does not recognize and 
substitute them.


I have other similar tables in the same database that I successfully 
constructed using the same method, but that was with a previous version 
of mysql (4.something). Something is different now.


The command I used was:

mysql load data infile '/share/store/library.csv' into table library
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n';

Any help appreciated.
Mark



You need to quote the actual linefeed character, not a backslash n.

--
Gerald L. Clark
Supplier Systems Corporation

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Load Data Infile and newlines

2006-08-08 Thread Mark Nienberg

Gerald L. Clark wrote:

Mark Nienberg wrote:
I'm trying to use LOAD DATA INFILE to read a csv formatted file into a 
mysql 5.0.22 table.  Some of the fields contain text that has newline 
characters in it.  After reading the manual to learn how special 
characters are treated, I altered the csv file so newlines are 
represented by '\\n', that is two backslashes followed by a lowercase n.


After loading the file, I find that the text fields do indeed contain 
'\n' as I hoped (one of the backslashes was stripped during import).  
But for some reason these newlines aren't treated as such by mysql or 
php.  For example, the php function nl2br does not recognize and 
substitute them.


I have other similar tables in the same database that I successfully 
constructed using the same method, but that was with a previous 
version of mysql (4.something). Something is different now.


The command I used was:

mysql load data infile '/share/store/library.csv' into table library
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n';



You need to quote the actual linefeed character, not a backslash n.


OK, I wrote a little perl script to replace \n with a real newline character and now 
it works, even without adding a backslash in front of it.  I don't know why it used 
to work without this.  Maybe the older version of phpmyadmin I was using did some 
kind of automatic conversion before inserting.  Thanks for your help.

Mark


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Load Data Infile and newlines

2006-08-07 Thread Mark Nienberg
I'm trying to use LOAD DATA INFILE to read a csv formatted file into a mysql 5.0.22 
table.  Some of the fields contain text that has newline characters in it.  After 
reading the manual to learn how special characters are treated, I altered the csv 
file so newlines are represented by '\\n', that is two backslashes followed by a 
lowercase n.


After loading the file, I find that the text fields do indeed contain '\n' as I hoped 
(one of the backslashes was stripped during import).  But for some reason these 
newlines aren't treated as such by mysql or php.  For example, the php function nl2br 
does not recognize and substitute them.


I have other similar tables in the same database that I successfully constructed 
using the same method, but that was with a previous version of mysql (4.something). 
Something is different now.


The command I used was:

mysql load data infile '/share/store/library.csv' into table library
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n';

Any help appreciated.
Mark


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Way too slow Load Data Infile

2006-07-30 Thread mos
I was able to speed it up somewhat by increasing the key_buffer_size to 
512M, and it's down to 4 hours to load 30 million rows. Still, I find that's 
pretty slow. Is there anything else I can do to speed it up? TIA
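
(Two knobs that specifically affect the index-rebuild phase on MyISAM; a
sketch, with the sizes picked only as examples:)

SET SESSION myisam_sort_buffer_size = 268435456;      -- 256M buffer for sorting keys
SET GLOBAL  myisam_max_sort_file_size = 10737418240;  -- allow a ~10G temporary sort file
-- With these raised, the rebuild is more likely to use the fast
-- 'Repair by sorting' path instead of 'Repair with keycache'.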


Mike

At 10:50 PM 7/28/2006, mos wrote:
I ran a file monitor and it appears MySQL has been updating the table's 
index for the past several hours as part of the Load Data Infile process. 
Is there any way to speed this up? I'm using MyISAM tables in v4.1.10. 
Here are the settings I'm using. Is there anything in there that will 
speed up the re-indexing?


TIA
Mike

+---------------------------+--------------------------+
| Variable_name             | Value                    |
+---------------------------+--------------------------+
| back_log                  | 50                       |
| basedir                   | u:\mysql\                |
| bdb_cache_size            | 8388600                  |
| bdb_home                  |                          |
| bdb_log_buffer_size       | 0                        |
| bdb_logdir                |                          |
| bdb_max_lock              | 1                        |
| bdb_shared_data           | OFF                      |
| bdb_tmpdir                |                          |
| binlog_cache_size         | 32768                    |
| bulk_insert_buffer_size   | 33554432                 |
| character_set_client      | latin1                   |
| character_set_connection  | latin1                   |
| character_set_database    | latin1                   |
| character_set_results     | latin1                   |
| character_set_server      | latin1                   |
| character_set_system      | utf8                     |
| character_sets_dir        | u:\mysql\share\charsets/ |
| collation_connection      | latin1_swedish_ci        |
| collation_database        | latin1_swedish_ci        |
| collation_server          | latin1_swedish_ci        |
| concurrent_insert         | ON                       |
| connect_timeout           | 5                        |
| datadir                   | u:\mysql_data\           |
| date_format               | %Y-%m-%d                 |
| datetime_format           | %Y-%m-%d %H:%i:%s        |
| default_week_format       | 0                        |
| delay_key_write           | OFF                      |
| delayed_insert_limit      | 100                      |
| delayed_insert_timeout    | 300                      |
| delayed_queue_size        | 1000                     |
| expire_logs_days          | 0                        |
| flush                     | OFF                      |
| flush_time                | 1800                     |
| ft_boolean_syntax         | + -()~*:|                |
| ft_max_word_len           | 84                       |
| ft_min_word_len           | 4                        |
| ft_query_expansion_limit  | 20                       |
| ft_stopword_file          | (built-in)               |
| group_concat_max_len      | 1024                     |
| have_archive              | NO                       |
| have_bdb                  | DISABLED                 |
| have_compress             | YES                      |

Re: Way too slow Load Data Infile

2006-07-29 Thread C.R.Vegelin

Hi Mike,

Try the following:

ALTER TABLE tblname DISABLE KEYS;
LOAD DATA INFILE ...
ALTER TABLE tblname ENABLE KEYS;

hth, Cor

- Original Message - 
From: mos [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Saturday, July 29, 2006 4:50 AM
Subject: Re: Way too slow Load Data Infile


I ran a file monitor and it appears MySQL has been updating the table's 
index for the past several hours as part of the Load Data Infile process. 
Is there any way to speed this up? I'm using MySIAM tables in v4.1.10. Here 
are the settings I'm using. Is there anything in there that will speed up 
the re-indexing?


TIA
Mike

+-+-+
| Variable_name   | Value 
|

+-+-+
| back_log| 50 
|
| basedir | u:\mysql\ 
|
| bdb_cache_size  | 8388600 
|

| bdb_home| |
| bdb_log_buffer_size | 0 
|

| bdb_logdir  | |
| bdb_max_lock| 1 
|
| bdb_shared_data | OFF 
|

| bdb_tmpdir  | |
| binlog_cache_size   | 32768 
|
| bulk_insert_buffer_size | 33554432 
|
| character_set_client| latin1 
|
| character_set_connection| latin1 
|
| character_set_database  | latin1 
|
| character_set_results   | latin1 
|
| character_set_server| latin1 
|
| character_set_system| utf8 
|
| character_sets_dir  | u:\mysql\share\charsets/ 
|
| collation_connection| latin1_swedish_ci 
|
| collation_database  | latin1_swedish_ci 
|
| collation_server| latin1_swedish_ci 
|
| concurrent_insert   | ON 
|
| connect_timeout | 5 
|
| datadir | u:\mysql_data\ 
|
| date_format | %Y-%m-%d 
|
| datetime_format | %Y-%m-%d %H:%i:%s 
|
| default_week_format | 0 
|
| delay_key_write | OFF 
|
| delayed_insert_limit| 100 
|
| delayed_insert_timeout  | 300 
|
| delayed_queue_size  | 1000 
|
| expire_logs_days| 0 
|
| flush   | OFF 
|
| flush_time  | 1800 
|
| ft_boolean_syntax   | + -()~*:| 
|
| ft_max_word_len | 84 
|
| ft_min_word_len | 4 
|
| ft_query_expansion_limit| 20 
|
| ft_stopword_file| (built-in) 
|
| group_concat_max_len| 1024 
|
| have_archive| NO 
|
| have_bdb| DISABLED 
|
| have_compress   | YES 
|
| have_crypt  | NO 
|
| have_csv| NO 
|
| have_example_engine | NO 
|
| have_geometry   | YES 
|
| have_innodb | DISABLED 
|
| have_isam   | NO 
|
| have_ndbcluster | NO 
|
| have_openssl| NO 
|
| have_query_cache| YES 
|
| have_raid   | NO 
|
| have_rtree_keys | YES 
|
| have_symlink| YES 
|

| init_connect| |
| init_file   | |
| init_slave  | |
| innodb_additional_mem_pool_size | 1048576 
|
| innodb_autoextend_increment | 8 
|
| innodb_buffer_pool_awe_mem_mb   | 0 
|
| innodb_buffer_pool_size | 8388608 
|

| innodb_data_file_path   | |
| innodb_data_home_dir| |
| innodb_fast_shutdown| ON 
|
| innodb_file_io_threads  | 4 
|
| innodb_file_per_table   | OFF 
|
| innodb_flush_log_at_trx_commit  | 1 
|

| innodb_flush_method | |
| innodb_force_recovery   | 0 
|
| innodb_lock_wait_timeout| 50 
|
| innodb_locks_unsafe_for_binlog  | OFF 
|

| innodb_log_arch_dir | |
| innodb_log_archive  | OFF 
|
| innodb_log_buffer_size  | 1048576 
|
| innodb_log_file_size| 5242880 
|
| innodb_log_files_in_group   | 2 
|

| innodb_log_group_home_dir   | |
| innodb_max_dirty_pages_pct  | 90 
|
| innodb_max_purge_lag| 0 
|
| innodb_mirrored_log_groups  | 1 
|
| innodb_open_files   | 300 
|
| innodb_table_locks  | ON 
|
| innodb_thread_concurrency   | 8 
|
| interactive_timeout | 28800 
|
| join_buffer_size| 33550336 
|
| key_buffer_size | 67108864 
|
| key_cache_age_threshold | 300 
|
| key_cache_block_size| 1024 
|
| key_cache_division_limit| 100 
|
| language| u:\mysql\share\english\ 
|
| large_files_support | ON 
|
| license | GPL 
|
| local_infile

Re: Way too slow Load Data Infile

2006-07-29 Thread mos

At 03:56 AM 7/29/2006, C.R.Vegelin wrote:

Hi Mike,

Try the following:

ALTER TABLE tblname DISABLE KEYS;
LOAD DATA INFILE ...
ALTER TABLE tblname ENABLE KEYS;

hth, Cor



Hi,
I had tried that on another (larger) table, and it does of course 
load the data slightly faster because the non-unique keys are disabled (the 
unique keys and primary key are still enabled), but when it tries to rebuild 
the index with ALTER TABLE ... ENABLE KEYS, the same thing happens. It runs 
for days trying to rebuild the index. It looks like rebuilding the index is 
disk bound even though there is 400MB of memory still free and available for use. I 
could go out and buy more RAM, but I doubt that with my current MySQL settings 
it's going to do any good. So if there is a way to improve the index 
building by adjusting my MySQL settings, I'd certainly like to hear from 
you. If I don't get this solved soon, I may have to abandon MySQL and use 
another database. :(
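
(One thing worth checking while ENABLE KEYS is running, sketched here:)

SHOW PROCESSLIST;
-- If the State column shows 'Repair with keycache', MySQL has fallen back from
-- the much faster 'Repair by sorting' path, usually because
-- myisam_max_sort_file_size or the free space in tmpdir is too small for the index.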


Mike



- Original Message - From: mos [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Saturday, July 29, 2006 4:50 AM
Subject: Re: Way too slow Load Data Infile


I ran a file monitor and it appears MySQL has been updating the table's 
index for the past several hours as part of the Load Data Infile process. 
Is there any way to speed this up? I'm using MySIAM tables in v4.1.10. 
Here are the settings I'm using. Is there anything in there that will 
speed up the re-indexing?


TIA
Mike

+-+-+
| Variable_name   | Value |
+-+-+
| back_log| 50 |
| basedir | u:\mysql\ |
| bdb_cache_size  | 8388600 |
| bdb_home| |
| bdb_log_buffer_size | 0 |
| bdb_logdir  | |
| bdb_max_lock| 1 |
| bdb_shared_data | OFF |
| bdb_tmpdir  | |
| binlog_cache_size   | 32768 |
| bulk_insert_buffer_size | 33554432 |
| character_set_client| latin1 |
| character_set_connection| latin1 |
| character_set_database  | latin1 |
| character_set_results   | latin1 |
| character_set_server| latin1 |
| character_set_system| utf8 |
| character_sets_dir  | u:\mysql\share\charsets/ |
| collation_connection| latin1_swedish_ci |
| collation_database  | latin1_swedish_ci |
| collation_server| latin1_swedish_ci |
| concurrent_insert   | ON |
| connect_timeout | 5 |
| datadir | u:\mysql_data\ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | OFF |
| delayed_insert_limit| 100 |
| delayed_insert_timeout  | 300 |
| delayed_queue_size  | 1000 |
| expire_logs_days| 0 |
| flush   | OFF |
| flush_time  | 1800 |
| ft_boolean_syntax   | + -()~*:| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit| 20 |
| ft_stopword_file| (built-in) |
| group_concat_max_len| 1024 |
| have_archive| NO |
| have_bdb| DISABLED |
| have_compress   | YES |
| have_crypt  | NO |
| have_csv| NO |
| have_example_engine | NO |
| have_geometry   | YES |
| have_innodb | DISABLED |
| have_isam   | NO |
| have_ndbcluster | NO |
| have_openssl| NO |
| have_query_cache| YES |
| have_raid   | NO |
| have_rtree_keys | YES |
| have_symlink| YES |
| init_connect| |
| init_file   | |
| init_slave  | |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb   | 0 |
| innodb_buffer_pool_size | 8388608 |
| innodb_data_file_path   | |
| innodb_data_home_dir| |
| innodb_fast_shutdown| ON |
| innodb_file_io_threads  | 4 |
| innodb_file_per_table   | OFF |
| innodb_flush_log_at_trx_commit  | 1 |
| innodb_flush_method | |
| innodb_force_recovery   | 0 |
| innodb_lock_wait_timeout| 50 |
| innodb_locks_unsafe_for_binlog  | OFF |
| innodb_log_arch_dir | |
| innodb_log_archive  | OFF |
| innodb_log_buffer_size  | 1048576 |
| innodb_log_file_size| 5242880

Way too slow Load Data Infile

2006-07-28 Thread mos
I executed a Load Data Infile 24 hours ago and its still running. Yikes! 
I have 6 of these tables to load and at this rate it will take forever. Can 
someone please run through my Status variables and let me know if there is 
a way I can optimize it? (These values are current with the Load Data still 
running.) This is running on a dedicated AMD 3500 machine without no one 
else accessing it. There is 400+MB of memory remaining on a 1gb machine 
running XP.


TIA
Mike

+--++
| Variable_name| Value  |
+--++
| Aborted_clients  | 12 |
| Aborted_connects | 0  |
| Binlog_cache_disk_use| 0  |
| Binlog_cache_use | 0  |
| Bytes_received   | 680821739  |
| Bytes_sent   | 2089813461 |
| Com_admin_commands   | 19 |
| Com_alter_db | 0  |
| Com_alter_table  | 2  |
| Com_analyze  | 0  |
| Com_backup_table | 0  |
| Com_begin| 4  |
| Com_change_db| 0  |
| Com_change_master| 0  |
| Com_check| 0  |
| Com_checksum | 0  |
| Com_commit   | 4  |
| Com_create_db| 0  |
| Com_create_function  | 0  |
| Com_create_index | 0  |
| Com_create_table | 87 |
| Com_dealloc_sql  | 0  |
| Com_delete   | 117|
| Com_delete_multi | 0  |
| Com_do   | 0  |
| Com_drop_db  | 0  |
| Com_drop_function| 0  |
| Com_drop_index   | 0  |
| Com_drop_table   | 88 |
| Com_drop_user| 0  |
| Com_execute_sql  | 0  |
| Com_flush| 0  |
| Com_grant| 0  |
| Com_ha_close | 0  |
| Com_ha_open  | 0  |
| Com_ha_read  | 0  |
| Com_help | 0  |
| Com_insert   | 1  |
| Com_insert_select| 156|
| Com_kill | 0  |
| Com_load | 18 |
| Com_load_master_data | 0  |
| Com_load_master_table| 0  |
| Com_lock_tables  | 0  |
| Com_optimize | 0  |
| Com_preload_keys | 0  |
| Com_prepare_sql  | 0  |
| Com_purge| 0  |
| Com_purge_before_date| 0  |
| Com_rename_table | 0  |
| Com_repair   | 0  |
| Com_replace  | 0  |
| Com_replace_select   | 0  |
| Com_reset| 0  |
| Com_restore_table| 0  |
| Com_revoke   | 0  |
| Com_revoke_all   | 0  |
| Com_rollback | 0  |
| Com_savepoint| 0  |
| Com_select   | 961267 |
| Com_set_option   | 69 |
| Com_show_binlog_events   | 0  |
| Com_show_binlogs | 0  |
| Com_show_charsets| 0  |
| Com_show_collations  | 0  |
| Com_show_column_types| 0  |
| Com_show_create_db   | 0  |
| Com_show_create_table| 0  |
| Com_show_databases   | 0  |
| Com_show_errors  | 0  |
| Com_show_fields  | 12 |
| Com_show_grants  | 0  |
| Com_show_innodb_status   | 0  |
| Com_show_keys| 10 |
| Com_show_logs| 0  |
| Com_show_master_status   | 0  |
| Com_show_new_master  | 0  |
| Com_show_open_tables | 0  |
| Com_show_privileges  | 0  |
| Com_show_processlist | 19 |
| Com_show_slave_hosts | 0  |
| Com_show_slave_status| 0  |
| Com_show_status  | 4  |
| Com_show_storage_engines | 0  |
| Com_show_tables  | 14 |
| Com_show_variables   | 2  |
| Com_show_warnings| 0  |
| Com_slave_start  | 0  |
| Com_slave_stop   | 0  |
| Com_truncate | 0  |
| Com_unlock_tables| 0  |
| Com_update   | 28461549   |
| Com_update_multi | 0  |
| Connections  | 13 |
| Created_tmp_disk_tables  | 0  |
| Created_tmp_files| 0  |
| Created_tmp_tables   | 91 |
| Delayed_errors   | 0  |
| Delayed_insert_threads   | 0  |
| Delayed_writes   | 0  |
| Flush_commands   | 1  |
| Handler_commit   | 0  |
| Handler_delete   | 444013 |
| Handler_discover | 0  |
| Handler_read_first

Re: Way too slow Load Data Infile

2006-07-28 Thread mos
I ran a file monitor and it appears MySQL has been updating the table's 
index for the past several hours as part of the Load Data Infile process. 
Is there any way to speed this up? I'm using MySIAM tables in v4.1.10. Here 
are the settings I'm using. Is there anything in there that will speed up 
the re-indexing?


TIA
Mike

+---------------------------+--------------------------+
| Variable_name             | Value                    |
+---------------------------+--------------------------+
| back_log                  | 50                       |
| basedir                   | u:\mysql\                |
| bdb_cache_size            | 8388600                  |
| bdb_home                  |                          |
| bdb_log_buffer_size       | 0                        |
| bdb_logdir                |                          |
| bdb_max_lock              | 1                        |
| bdb_shared_data           | OFF                      |
| bdb_tmpdir                |                          |
| binlog_cache_size         | 32768                    |
| bulk_insert_buffer_size   | 33554432                 |
| character_set_client      | latin1                   |
| character_set_connection  | latin1                   |
| character_set_database    | latin1                   |
| character_set_results     | latin1                   |
| character_set_server      | latin1                   |
| character_set_system      | utf8                     |
| character_sets_dir        | u:\mysql\share\charsets/ |
| collation_connection      | latin1_swedish_ci        |
| collation_database        | latin1_swedish_ci        |
| collation_server          | latin1_swedish_ci        |
| concurrent_insert         | ON                       |
| connect_timeout           | 5                        |
| datadir                   | u:\mysql_data\           |
| date_format               | %Y-%m-%d                 |
| datetime_format           | %Y-%m-%d %H:%i:%s        |
| default_week_format       | 0                        |
| delay_key_write           | OFF                      |
| delayed_insert_limit      | 100                      |
| delayed_insert_timeout    | 300                      |
| delayed_queue_size        | 1000                     |
| expire_logs_days          | 0                        |
| flush                     | OFF                      |
| flush_time                | 1800                     |
| ft_boolean_syntax         | + -()~*:|                |
| ft_max_word_len           | 84                       |
| ft_min_word_len           | 4                        |
| ft_query_expansion_limit  | 20                       |
| ft_stopword_file          | (built-in)               |
| group_concat_max_len      | 1024                     |
| have_archive              | NO                       |
| have_bdb                  | DISABLED                 |
| have_compress             | YES                      |
| have_crypt                | NO                       |
| have_csv                  | NO                       |
| have_example_engine

Load Data Infile Replace ... too slow

2006-06-11 Thread mos
I'm replacing 14 million rows of data using Load data infile replace and 
it is taking forever to complete. I killed it after 6.2 hours on an AMD 
3500 machine. I then deleted all the data from the table and used Load 
data infile ignore and it completed quite quickly in about 30 minutes.  Is 
there any way to speed up using Load data infile replace? TIA
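
(One alternative worth timing, sketched with invented staging and column
names. REPLACE has to delete and re-insert every row whose key already exists,
which is expensive on a heavily indexed table; loading into a staging table
and then running a single INSERT ... ON DUPLICATE KEY UPDATE, available in 4.1
and later, avoids the delete step, though whether it wins here would need a
benchmark:)

CREATE TABLE big_stage LIKE big;
LOAD DATA INFILE '/tmp/new_rows.tsv' INTO TABLE big_stage
  FIELDS TERMINATED BY '\t';
INSERT INTO big
  SELECT * FROM big_stage
  ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2);
DROP TABLE big_stage;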


Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE and BIT columns

2006-02-27 Thread sheeri kritzer
Hi Julie,

If you notice after your import, you have 3 warnings.  This intrigued
me, so I created a test case (also running 5.0.18 standard):

create table bit_test (b bit(8));

cat /tmp/bit_test.txt

01010101
2
b'010'
b\'010\'
0x2
02

mysql load data infile '/tmp/bit_test.txt' into table bit_test;

Query OK, 6 rows affected, 5 warnings (0.05 sec)
Records: 6  Deleted: 0  Skipped: 0  Warnings: 5


Hrm.  I got 5 warnings; you'd only gotten 3.  Weird!


mysql show warnings;
+-+--+-+
| Level   | Code | Message |
+-+--+-+
| Warning | 1264 | Out of range value adjusted for column 'b' at row 1 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 3 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 4 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 5 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 6 |
+-+--+-+
5 rows in set (0.02 sec)


What this says to me is that the values were too big, for all but row 2.

mysql select bin(b+0) from bit_test;
+----------+
| bin(b+0) |
+----------+
| 11111111 |
| 110010   |
| 11111111 |
| 11111111 |
| 11111111 |
| 11111111 |
+----------+
6 rows in set (0.05 sec)

so the 11111111 values make sense -- the values were larger than the
largest value, so it truncated them to the largest value.  But why, when
I insert a 2, does it use 110010 instead of 10?

Let's test:


mysql insert into bit_test VALUES (2);
Query OK, 1 row affected (0.00 sec)

mysql select bin(b+0) from bit_test;
+----------+
| bin(b+0) |
+----------+
| 11111111 |
| 110010   |
| 11111111 |
| 11111111 |
| 11111111 |
| 11111111 |
| 10       |
+----------+
7 rows in set (0.00 sec)


That makes sense!  the last value is 10, which makes sense for a
binary value of 2.  On a hunch, I tried to see what happened if it
treated 2 as a string, not an integer:


mysql insert into bit_test VALUES ('2');
Query OK, 1 row affected (0.00 sec)

mysql select bin(b+0) from bit_test;
+----------+
| bin(b+0) |
+----------+
| 11111111 |
| 110010   |
| 11111111 |
| 11111111 |
| 11111111 |
| 11111111 |
| 10       |
| 110010   |
+----------+
8 rows in set (0.01 sec)


Aha!   the culprit -- it was thinking that the 2 in the file was a
string, not an int.
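
(Given that, one way to make the load treat the file contents as numbers is
the LOAD DATA ... SET clause available in 5.0; a sketch, assuming the file
holds plain integers rather than the b'...' or 0x... literal forms:)

LOAD DATA INFILE '/tmp/bit_test.txt'
INTO TABLE bit_test (@b)
SET b = CAST(@b AS UNSIGNED);
-- '2' in the file becomes the number 2, i.e. the bit pattern 00000010.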


Hope this helped,

-Sheeri

On 2/24/06, Julie Kelner [EMAIL PROTECTED] wrote:
 Hi. I'm using MySQL 5.0.18, and I'm trying to use LOAD DATA INFILE into 
 tables that have BIT(8) columns. No matter
 what format I use, the result is not what I expect (see example below.) 
 Anyone know how to properly format the data for loading into a BIT column? 
 Thanks!


 $ cat /tmp/bit_test.txt
 01010101
 2
 b'010'
 b\'010\'
 0x2
 02


 mysql create table bit_test (b bit(8));
 Query OK, 0 rows affected (0.01 sec)

 mysql load data infile '/tmp/bit_test.txt' into table bit_test;
 Query OK, 6 rows affected, 3 warnings (0.00 sec)
 Records: 6  Deleted: 0  Skipped: 0  Warnings: 3

 mysql select bin(b+0) from bit_test;
 +--+
 | bin(b+0) |
 +--+
 |  |
 | 110010   |
 |  |
 |  |
 |  |
 |  |
 +--+
 6 rows in set (0.00 sec)



 Thanks!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE and BIT columns

2006-02-27 Thread Julie Kelner

Sheeri,

Wow. That was my first forum email and I thought it would go unnoticed. I 
sure was wrong.


You are exactly right, because apparently with LOAD DATA INFILE, everything 
in the file is treated as a string. I'm using PHP to create the text file, 
so I tried PHP's pack() function to write '2' as binary data. And...it 
worked!


Thanks so much for your input.
~ Julie

- Original Message - 
From: sheeri kritzer [EMAIL PROTECTED]

To: Julie Kelner [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, February 27, 2006 12:50 PM
Subject: Re: LOAD DATA INFILE and BIT columns


Hi Julie,

If you notice after your import, you have 3 warnings.  This intrigued
me, so I created a test case (also running 5.0.18 standard):

create table bit_test (b bit(8));

cat /tmp/bit_test.txt

01010101
2
b'010'
b\'010\'
0x2
02

mysql load data infile '/tmp/bit_test.txt' into table bit_test;

Query OK, 6 rows affected, 5 warnings (0.05 sec)
Records: 6  Deleted: 0  Skipped: 0  Warnings: 5


Hrm.  I got 5 warnings; you'd only gotten 2.  Weird!


mysql show warnings;
+-+--+-+
| Level   | Code | Message |
+-+--+-+
| Warning | 1264 | Out of range value adjusted for column 'b' at row 1 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 3 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 4 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 5 |
| Warning | 1264 | Out of range value adjusted for column 'b' at row 6 |
+-+--+-+
5 rows in set (0.02 sec)


What this says to me is that the values were too big, for all but row 2.

mysql select bin(b+0) from bit_test;
+--+
| bin(b+0) |
+--+
|  |
| 110010   |
|  |
|  |
|  |
|  |
+--+
6 rows in set (0.05 sec)

so the  values make sense -- the values were larger than the
largest value, so it truncated it to the largest value.  But why, when
I insert a 2, does it use 11010 instead of 10?

Let's test:


mysql insert into bit_test VALUES (2);
Query OK, 1 row affected (0.00 sec)

mysql select bin(b+0) from bit_test;
+--+
| bin(b+0) |
+--+
|  |
| 110010   |
|  |
|  |
|  |
|  |
| 10   |
+--+
7 rows in set (0.00 sec)


That makes sense!  the last value is 10, which makes sense for a
binary value of 2.  On a hunch, I tried to see what happened if it
treated 2 as a string, not an integer:


mysql insert into bit_test VALUES ('2');
Query OK, 1 row affected (0.00 sec)

mysql select bin(b+0) from bit_test;
+--+
| bin(b+0) |
+--+
|  |
| 110010   |
|  |
|  |
|  |
|  |
| 10   |
| 110010   |
+--+
8 rows in set (0.01 sec)


Aha!   the culprit -- it was thinking that the 2 in the file was a
string, not an int.


Hope this helped,

-Sheeri

On 2/24/06, Julie Kelner [EMAIL PROTECTED] wrote:
Hi. I'm using MySQL 5.0.18, and I'm trying to use LOAD DATA INFILE into 
tables that have BIT(8) columns. No matter
what format I use, the result is not what I expect (see example below.) 
Anyone know how to properly format the data for loading into a BIT column? 
Thanks!



$ cat /tmp/bit_test.txt
01010101
2
b'010'
b\'010\'
0x2
02


mysql create table bit_test (b bit(8));
Query OK, 0 rows affected (0.01 sec)

mysql load data infile '/tmp/bit_test.txt' into table bit_test;
Query OK, 6 rows affected, 3 warnings (0.00 sec)
Records: 6  Deleted: 0  Skipped: 0  Warnings: 3

mysql select bin(b+0) from bit_test

LOAD DATA INFILE and BIT columns

2006-02-24 Thread Julie Kelner
Hi. I'm using MySQL 5.0.18, and I'm trying to use LOAD DATA INFILE into tables 
that have BIT(8) columns. No matter
what format I use, the result is not what I expect (see example below.) Anyone 
know how to properly format the data for loading into a BIT column? Thanks!


$ cat /tmp/bit_test.txt
01010101
2
b'010'
b\'010\'
0x2
02


mysql create table bit_test (b bit(8));
Query OK, 0 rows affected (0.01 sec)

mysql load data infile '/tmp/bit_test.txt' into table bit_test;
Query OK, 6 rows affected, 3 warnings (0.00 sec)
Records: 6  Deleted: 0  Skipped: 0  Warnings: 3

mysql select bin(b+0) from bit_test;
+--+
| bin(b+0) |
+--+
|  |
| 110010   |
|  |
|  |
|  |
|  |
+--+
6 rows in set (0.00 sec)



Thanks!


LOAD DATA INFILE

2006-01-11 Thread Jay Paulson \(CE CEN\)
I'm creating a file via PHP after getting information from a log file.  I 
create a new file for data import into a table in MySQL.  For some reason now 
I'm getting this error below.  I have no clue what it means.  I've checked the 
file and it is all there and I've even changed the permissions on it so that 
anyone can do anything with it, but still no luck.  Any ideas on what I could 
do?

Can't get stat of './import_file.txt' (Errcode: 13)


RE: LOAD DATA INFILE (SOLVED!)

2006-01-11 Thread Jay Paulson \(CE CEN\)
For any of you that run into this problem I found this:

After receiving some help from a friend, it turns out that one of the
directories within the path to the target destination was not flagged 755,
and because of that one (which happened to be 3 levels back) it didn't
allow mysql to properly execute what it needed.  So if you run into this,
simply check each directory within the path and make sure of its
settings.
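
(Another way around Errcode 13, if the file lives on the machine the client
runs on: with LOCAL, the client reads the file and ships it to the server, so
the server's filesystem permissions on that path stop mattering. A sketch,
assuming local_infile is enabled on both client and server, with a
hypothetical table name:)

LOAD DATA LOCAL INFILE '/path/to/import_file.txt'
INTO TABLE my_table
FIELDS TERMINATED BY '\t';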


-Original Message-
From: Jay Paulson (CE CEN) [mailto:[EMAIL PROTECTED]
Sent: Wed 1/11/2006 11:17 AM
To: mysql@lists.mysql.com
Subject: LOAD DATA INFILE
 
I'm creating a file via PHP after getting information from a log file.  I 
create a new file for data import into a table in MySQL.  For some reason now 
I'm getting this error below.  I have no clue what it means.  I've checked the 
file and it is all there and I've even changed the permissions on it so that 
anyone can do anything with it, but still no luck.  Any ideas on what I could 
do?

Can't get stat of './import_file.txt' (Errcode: 13)



Re: LOAD DATA INFILE

2006-01-11 Thread praj

Do chmod -R 755 on datapath

Thanks
Praj
- Original Message - 
From: Jay Paulson (CE CEN) [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Wednesday, January 11, 2006 10:47 PM
Subject: LOAD DATA INFILE


I'm creating a file via PHP after getting information from a log file.  I 
create a new file for data import into a table in MySQL.  For some reason 
now I'm getting this error below.  I have no clue what it means.  I've 
checked the file and it is all there and I've even changed the permissions 
on it so that anyone can do anything with it, but still no luck.  Any ideas 
on what I could do?


Can't get stat of './import_file.txt' (Errcode: 13)


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA INFILE and Extended ASCII....

2006-01-11 Thread Jake Peavy
Hey yall,

I have a delimited text file I'm trying to load into my db.  The problem is
that for whatever reason (totally beyond my control) the line and field
delimiters include an Extended ASCII char (0xA0) like so:

05HIUT841427BLCA á  Dell Computer áOne *Dell* Way, *Round
Rock*Texas 78682 áDELL á  TX áEastern
Operations
á áá

Is there any way I can set the FIELD TERMINATOR and LINE TERMINATOR to
include this character so my LOAD DATA INFILE works?

I tried using a variable and mysql didn't like it:

set @field:=concat(' ',char(160),'\t');
set @line:=concat('\t',char(160),char(160),'\n');
load data infile
 'C:\\Documents and Settings\\jpeavy1\\Desktop\\codes.20060109-
112400.txt'
into table
 t.codes
fields
 terminated by @field
 optionally enclosed by ''
lines
 terminated by @line
ignore
 7 lines;

TIA,
jp
[mysql v5.0.16-nt on WinXP]


Re: Sorry for my n00bie question - mysql 5.0.18 with load data infile

2006-01-07 Thread Gleb Paharenko
Hello.



For MyISAM tables you may want to use ALTER TABLE ... DISABLE KEYS,

ALTER TABLE ... ENABLE KEYS as described in the manual:

  http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html



Increasing the bulk_insert_buffer_size can help as well.
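
For the archives, the whole pattern strung together looks roughly like the
sketch below. The table name incomp_cdr is taken from the message quoted
underneath; the file name is made up, and bulk_insert_buffer_size mainly
helps MyISAM bulk loads:

SET SESSION bulk_insert_buffer_size = 64 * 1024 * 1024;  -- bigger cache for bulk inserts
ALTER TABLE incomp_cdr DISABLE KEYS;                     -- postpone non-unique index updates
LOAD DATA LOCAL INFILE '/tmp/incomplete_cdrs.csv'
INTO TABLE incomp_cdr
FIELDS TERMINATED BY ',';
ALTER TABLE incomp_cdr ENABLE KEYS;                      -- rebuild the indexes in one pass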







George Law [EMAIL PROTECTED] wrote:





List:  MySQL General Discussion

From:  George Law  Date:   January 6 2006 11:01pm

Subject:   Sorry for my n00bie question - mysql 5.0.18 with load data 
infile




Hi All,

Just wanted to apologize for my earlier rambling emails.

I am been working on recoving from a innodb corruption for 2 days now

and was really in panic mode.

 

Everything seems to be running well with 5.0.18, althought my server

load it up there.

I am running 5.0.18 on a Suse 9.3 box, P4 3 GHZ, 4 GB RAM

 

 

 16:38:57 up 86 days,  7:20,  4 users,  load average: 4.44, 4.41, 4.51

USER TTYLOGIN@   IDLE   JCPU   PCPU WHAT

root tty1  01Nov05 66days  0.23s  0.23s -bash

 

this has been pretty much a sustained all day long.

 

 

I have a perl script that takes some text cdrs, reformats them into .csv

files, and then slams then into mysql using 'load data local'.

Every 10 minutes, I process the raw cdrs, separate the completes and

incompletes, generate 3 .csv files, and do 3 'load data local' commands.

completed cdrs go into a innodb table, incompletes and raw cdrs go into

a myISAM table. 

 

 

The tables are fixed length - no varchars.  everything is forced to the

full char length in my perl, so my 'load data' generates no warnings on

truncated fields.

 

The reason I did it this way was the incomp cdrs and raw cdrs are all

read only.  The completed cdrs have fields that get updated at a later

time.

There seemed to be a lot of processes out there waiting on table locks

when this was a myisam table.

 

The performance of the 'load data infile' command seems like it is

taking way too long.

 

(dates are just from `date` in perl):

2006-01-06 16:35:21 : begin import into comp_cdr

2006-01-06 16:35:42 : end import into comp_cdr (1361) (0)(0)(0)

 

innodb table: 65 inserts per second.



2006-01-06 16:35:42 : begin import into incomp_cdr

2006-01-06 16:39:10 : import into incomp_cdr2 (15084) (0)(0)(0)

Processed 16445 CDRs in 429.60 seconds - 38.28 cdr/sec 1361 complete

calls, 15084 incomplete calls



the myisam table takes longer... 75 inserts a second.

 

 

My understanding is that 'load data' should be considerably faster than

doing one big loop in perl and doing inserts.  I haven't went back to

perl for this, but that might be my next step.

 

anyone have any suggestions.  I am not posting any table descriptions

here, but will do if needed.

--

 

 



George Law

VoIP Network Developer

864-678-3161

[EMAIL PROTECTED]

MSN: [EMAIL PROTECTED]



 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Sorry for my n00bie question - mysql 5.0.18 with load data infile

2006-01-06 Thread George Law
Hi All,
Just wanted to apologize for my earlier rambling emails.
I have been working on recovering from an InnoDB corruption for 2 days now
and was really in panic mode.
 
Everything seems to be running well with 5.0.18, although my server
load is up there.
I am running 5.0.18 on a Suse 9.3 box, P4 3 GHZ, 4 GB RAM
 
 
 16:38:57 up 86 days,  7:20,  4 users,  load average: 4.44, 4.41, 4.51
USER TTYLOGIN@   IDLE   JCPU   PCPU WHAT
root tty1  01Nov05 66days  0.23s  0.23s -bash
 
this has been pretty much a sustained all day long.
 
 
I have a perl script that takes some text cdrs, reformats them into .csv
files, and then slams them into mysql using 'load data local'.
Every 10 minutes, I process the raw cdrs, separate the completes and
incompletes, generate 3 .csv files, and do 3 'load data local' commands.
completed cdrs go into a innodb table, incompletes and raw cdrs go into
a myISAM table. 
 
 
The tables are fixed length - no varchars.  Everything is forced to the
full char length in my perl, so my 'load data' generates no warnings on
truncated fields.
 
The reason I did it this way is that the incomp cdrs and raw cdrs are all
read only.  The completed cdrs have fields that get updated at a later
time.
There seemed to be a lot of processes out there waiting on table locks
when this was a myisam table.
 
The performance of the 'load data infile' command seems like it is
taking way too long.
 
(dates are just from `date` in perl):
2006-01-06 16:35:21 : begin import into comp_cdr
2006-01-06 16:35:42 : end import into comp_cdr (1361) (0)(0)(0)
 
innodb table: 65 inserts per second.

2006-01-06 16:35:42 : begin import into incomp_cdr
2006-01-06 16:39:10 : import into incomp_cdr2 (15084) (0)(0)(0)
Processed 16445 CDRs in 429.60 seconds - 38.28 cdr/sec 1361 complete
calls, 15084 incomplete calls

the myisam table takes longer... 75 inserts a second.
 
 
My understanding is that 'load data' should be considerably faster than
doing one big loop in perl and doing inserts.  I haven't gone back to
perl for this, but that might be my next step.
 
Anyone have any suggestions?  I am not posting any table descriptions
here, but will do so if needed.
--
 
 

George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]

 

 


LOAD DATA INFILE Syntax

2005-12-12 Thread Elliot Kleiman
Hi mysql-list,

I just installed,

++
| version()  |
++
| 5.0.16-log |
++

Here is what I am testing out: (simple table and data)

% echo 'a b c d' > testfile

mysql CREATE TABLE `test` (
- `fe` VARCHAR( 2 ),
- `fi` VARCHAR( 2 ),
- `fo` VARCHAR( 2 ),
- `fum` VARCHAR( 2 )
- );
Query OK, 0 rows affected (0.00 sec)

mysql load data local infile './test' into table test;
ERROR 1148 (42000): The used command is not allowed with this MySQL
version

Q: What am I doing wrong(missing)?

  { I did not see it in the documentation online }
  http://dev.mysql.com/doc/refman/5.0/en/load-data.html

Thanks, Elliot

P.S.
The same command works fine in previous versions of MySQL.

__
WizardsWorks Cluster
http://www.wizardsworks.org/




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: LOAD DATA INFILE Syntax

2005-12-12 Thread Logan, David (SST - Adelaide)
Hi Elliot,

It is in the docs, just a little bit further down the page 8-)

LOCAL works only if your server and your client both have been enabled
to allow it. For example, if mysqld was started with --local-infile=0,
then LOCAL  does not work. See Section 5.6.4, Security Issues with LOAD
DATA LOCAL. 
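
A quick way to check the server side of that (a sketch; the client must
also be built and invoked with LOCAL enabled, e.g. mysql --local-infile=1,
and changing the global at runtime needs the SUPER privilege on a server
version that allows it):

-- is LOAD DATA LOCAL currently allowed by the server?
SHOW GLOBAL VARIABLES LIKE 'local_infile';
-- if it reports OFF, it may be switchable at runtime
SET GLOBAL local_infile = 1;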

Regards

David Logan
Database Administrator
HP Managed Services
148 Frome Street,
Adelaide 5000
Australia

+61 8 8408 4273 - Work
+61 417 268 665 - Mobile
+61 8 8408 4259 - Fax


-Original Message-
From: Elliot Kleiman [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 13 December 2005 8:12 AM
To: mysql@lists.mysql.com
Subject: LOAD DATA INFILE Syntax

Hi mysql-list,

I just installed,

++
| version()  |
++
| 5.0.16-log |
++

Here is what I am testing out: (simple table and data)

% echo 'a b c d'  testfile

mysql CREATE TABLE `test` (
- `fe` VARCHAR( 2 ),
- `fi` VARCHAR( 2 ),
- `fo` VARCHAR( 2 ),
- `fum` VARCHAR( 2 )
- );
Query OK, 0 rows affected (0.00 sec)

mysql load data local infile './test' into table test;
ERROR 1148 (42000): The used command is not allowed with this MySQL
version

Q: What am I doing wrong(missing)?

  { I did not see it in the documentation online }
  http://dev.mysql.com/doc/refman/5.0/en/load-data.html

Thanks, Elliot

P.S.
The same command works fine in previous versions of MySQL.

__
WizardsWorks Cluster
http://www.wizardsworks.org/




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE (url)

2005-12-05 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Arno Coetzee wrote:
 Test USER wrote:
 
 Can't get this to work, but i would like to specify LOAD DATA to use
 an INFILE from an URL.

 For example
 LOAD DATA LOCAL INFILE 'http://www.testserver.com/data.csv' But i get
 an error message saying file not found.

 Anyone know if this is even possible ?

 -
 FREE E-MAIL IN 1 MINUTE!
 - [EMAIL PROTECTED] - http://www.pc.nu

  

 it is possible .
 
 you have to write your own script which fetches the file (using wget)
 and placing the file in a directory where it can be accessed. then you
 can load that file.
 
 i currently have something like this running...
 
 i make a dump file on the remote server runing a script with crontab. i
 gzip that file just to make the file transfer faster . the local server
 then fetches the file (wget) and gunzips it and then i load the file.
 works pretty good.
 

The JDBC driver, and the mysqli interface for PHP support this for
LOAD DATA LOCAL INFILE, check the documentation for each if this is an
option for you.

-Mark


- --
Mark Matthews
MySQL AB, Software Development Manager - Connectivity
www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.1 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFDlGOEtvXNTca6JD8RAlGyAJwNUa73auCDBjIyib7ikIBrUb1vsQCfUPV5
eoaYiyOOX4T3YVEuOrjj+9A=
=/wQd
-END PGP SIGNATURE-

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA INFILE (url)

2005-12-01 Thread Test USER
Can't get this to work, but I would like to specify LOAD DATA to use an INFILE
from a URL.

For example
LOAD DATA LOCAL INFILE 'http://www.testserver.com/data.csv'
But I get an error message saying file not found.

Anyone know if this is even possible?

-
FREE E-MAIL IN 1 MINUTE!
 - [EMAIL PROTECTED] - http://www.pc.nu

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE (url)

2005-12-01 Thread Peter J Milanese
I have never seen this. Mysql would have to do a wget of the file then dump it. 
Last I knew it wasn't a web browser. There may be a way to do the wget inline 
though, or at least write something in shell or perl to do it. Is this cron'd 
or something, or a one time thing?



-
Sent from my NYPL BlackBerry Handheld.


- Original Message -
From: Test USER [EMAIL PROTECTED]
Sent: 12/01/2005 04:55 AM
To: mysql@lists.mysql.com
Subject: LOAD DATA INFILE (url)

Can't get this to work, but i would like to specify LOAD DATA to use an INFILE
from an URL.

For example
LOAD DATA LOCAL INFILE 'http://www.testserver.com/data.csv'
But i get an error message saying file not found.

Anyone know if this is even possible ?

-
FREE E-MAIL IN 1 MINUTE!
 - [EMAIL PROTECTED] - http://www.pc.nu

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE (url)

2005-12-01 Thread Test USER
Hi thanks for your answer!
I would like this to be done via cron 3-4 times a day.

Quoting Peter J Milanese [EMAIL PROTECTED]:

 I have never seen this. Mysql would have to do a wget of the file then
 dump it. Last I knew it wasn't a web browser. There may be a way to do
 the wget inline though, or at least write something in shell or perl to
 do it. Is this cron'd or something, or a one time thing?
 
 
 
 -
 Sent from my NYPL BlackBerry Handheld.
 
 
 - Original Message -
 From: Test USER [EMAIL PROTECTED]
 Sent: 12/01/2005 04:55 AM
 To: mysql@lists.mysql.com
 Subject: LOAD DATA INFILE (url)
 
 Can't get this to work, but i would like to specify LOAD DATA to use an
 INFILE
 from an URL.
 
 For example
 LOAD DATA LOCAL INFILE 'http://www.testserver.com/data.csv'
 But i get an error message saying file not found.
 
 Anyone know if this is even possible ?
 
 -
 FREE E-MAIL IN 1 MINUTE!
  - [EMAIL PROTECTED] - http://www.pc.nu
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 



 

-
FREE E-MAIL IN 1 MINUTE!
 - [EMAIL PROTECTED] - http://www.pc.nu

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE (url)

2005-12-01 Thread Arno Coetzee

Test USER wrote:

Can't get this to work, but i would like to specify LOAD DATA to use an INFILE 
from an URL.


For example
LOAD DATA LOCAL INFILE 'http://www.testserver.com/data.csv' 
But i get an error message saying file not found.


Anyone know if this is even possible ?

-
FREE E-MAIL IN 1 MINUTE!
- [EMAIL PROTECTED] - http://www.pc.nu

 


It is possible.

You have to write your own script which fetches the file (using wget)
and places it in a directory where it can be accessed. Then you
can load that file.

I currently have something like this running...

I make a dump file on the remote server by running a script with crontab. I
gzip that file just to make the file transfer faster. The local server
then fetches the file (wget) and gunzips it, and then I load the file.
Works pretty well.
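
The load step that such a cron job ends up running is nothing special,
roughly like this (a sketch only; the path, table name and delimiter are
invented, and the fetch itself is plain wget/gunzip outside of MySQL):

-- run after something like: wget http://www.testserver.com/data.csv.gz && gunzip data.csv.gz
LOAD DATA LOCAL INFILE '/var/tmp/data.csv'
INTO TABLE imported_data
FIELDS TERMINATED BY ',';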


--
Arno Coetzee
Developer
Flash Media Group
Office : +27 12 342 3490
Mobile : +27 82 693 6180
Fax : + 27 12 430 4269
www.flashmedia.co.za

FMG Total messaging solution.  For all your GSM (SMS and USSD) messaging needs.  
Quick and easy do-it-yourself interfaces. http://www.flashmedia.co.za/tms.php 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Is Load Data Infile or Update is faster?

2005-11-19 Thread mos
I am doing a balance line comparison between the rows of an existing table 
and a text file that has newer data in it. So I'm comparing the values 
field by field to the existing rows in the table. If any of the field 
values are different, I need to update the table with these new values. The 
table has around 25 million rows and usually only 1% of the table needs to 
be compared.


I've found 2 ways to update the table with the new values:

1) I could write the new values to a text file and then use Load Data
InFile REPLACE ... which will replace the existing rows for the rows that
need changing. The problem of course is that the REPLACE option means it will
look up the old row using the primary key/unique key, delete the row, then add
the new row. This is disk intensive.


2) The other option would be to execute an Update for each row that needs 
changing and set the changed columns individually. This means the existing 
row will not have to be deleted and only some of the existing row value(s) 
are changed. The problem is there could be 10,000 to 100,000 rows that need 
changing.


So which option is going to be faster? A Load Data Infile that deletes the 
old row and adds a new one, or thousands of Updates changing only 1 to 6 
values at a time?


TIA

Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Is Load Data Infile or Update is faster?

2005-11-19 Thread Rhino


- Original Message - 
From: mos [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Saturday, November 19, 2005 8:16 PM
Subject: Is Load Data Infile or Update is faster?


I am doing a balance line comparison between the rows of an existing table 
and a text file that has newer data in it. So I'm comparing the values 
field by field to the existing rows in the table. If any of the field 
values are different, I need to update the table with these new values. The 
table has around 25 million rows and usually only 1% of the table needs to 
be compared.


I've found 2 ways to update the table with the new values:

1) I could write the new values to a text file and then use Load Data 
InFile REPLACE ... which will replace the existing rows for the rows that 
need changing. The problem of course the REPLACE option means it will look 
up the old row using the primary key/unique key, deletes the row, then 
adds the new row. This is disk intensive.


2) The other option would be to execute an Update for each row that needs 
changing and set the changed columns individually. This means the existing 
row will not have to be deleted and only some of the existing row value(s) 
are changed. The problem is there could be 10,000 to 100,000 rows that 
need changing.


So which option is going to be faster? A Load Data Infile that deletes the 
old row and adds a new one, or thousands of Updates changing only 1 to 6 
values at a time?


Any answer we could give you on the basis of the information you have 
provided would be based largely on assumptions that may not be true in your 
particular case. You haven't said a word about your hardware or database 
design or whatever indexes are on your data, listing only three of the more 
obvious factors that you have omitted, any of which could have huge impacts 
on the answer. Even if you told us all of that, the performance experts 
could probably only ballpark the answer.


Wouldn't it be much, _much_ better if you did your own benchmark, using real 
data, on your own hardware and with your own database design and indexes to 
see which alternative would really work faster?


Rhino 




--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.362 / Virus Database: 267.13.4/175 - Release Date: 18/11/2005


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Is Load Data Infile or Update is faster?

2005-11-19 Thread mos

At 11:15 PM 11/19/2005, Rhino wrote:


- Original Message - From: mos [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Saturday, November 19, 2005 8:16 PM
Subject: Is Load Data Infile or Update is faster?


I am doing a balance line comparison between the rows of an existing 
table and a text file that has newer data in it. So I'm comparing the 
values field by field to the existing rows in the table. If any of the 
field values are different, I need to update the table with these new 
values. The table has around 25 million rows and usually only 1% of the 
table needs to be compared.


I've found 2 ways to update the table with the new values:

1) I could write the new values to a text file and then use Load Data 
InFile REPLACE ... which will replace the existing rows for the rows 
that need changing. The problem of course the REPLACE option means it 
will look up the old row using the primary key/unique key, deletes the 
row, then adds the new row. This is disk intensive.


2) The other option would be to execute an Update for each row that needs 
changing and set the changed columns individually. This means the 
existing row will not have to be deleted and only some of the existing 
row value(s) are changed. The problem is there could be 10,000 to 100,000 
rows that need changing.


So which option is going to be faster? A Load Data Infile that deletes 
the old row and adds a new one, or thousands of Updates changing only 1 
to 6 values at a time?
Any answer we could give you on the basis of the information you have 
provided would be based largely on assumptions that may not be true in 
your particular case. You haven't said a word about your hardware or 
database design
or whatever indexes are on your data, listing only three of the more 
obvious factors that you have omitted, any of which could have huge 
impacts on the answer. Even if you told us all of that, the performance 
experts could probably only ballpark the answer.


Wouldn't it be much, _much_ better if you did your own benchmark, using 
real data, on your own hardware and with your own database design and 
indexes to see which alternative would really work faster?


Sure, but first I wanted to bounce it off the members in this group to see
what they thought before I started writing code. I thought executing
10,000-100,000 Update queries would be slower than executing one Load Data
Infile Replace query, because each line loaded by Load Data would otherwise
have to be its own Update query. But if you think the updates could be faster
(depending on the # of indexes being used), I'll write a benchmark program and
find out. I just didn't want to waste an afternoon if I didn't have to.


Mike



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Load data infile fails to put entire PDF into one record

2005-11-10 Thread Whil Hentzen

Gleb Paharenko wrote:

Hello.

Are you sure that you want to load PDF with LOAD DATA INFILE?


Well, I had been sure, but not so much anymore.


Maybe LOAD_FILE is that you want:

  http://dev.mysql.com/doc/refman/5.0/en/string-functions.html


Yup, this was the ticket. I'd seen several references to PDFs and LOAD 
DATA INFILE. But this works great:


I'm using FoxPro to convert DBF data to MySQL, and have PDFs in FoxPro 
memo fields. Here's (for the benefit of the archives) what I did (this 
is xbase code):


* process each record in a DBF
scan
  * job_no is a unique value for a record
  m.lcPDF = "load_file('" + curdir() + alltrim(job_no) + ".pdf" + "')"

  * convert single backslashes to doubles - else you get a
  * Can't get stat of 'test.txt' (Errcode: 2) error
  m.lcPDF = strtran(m.lcPDF, "\", "\\")

  * create a string that consists of the INSERT command
  m.lcStrInsert = [insert into COMPLETE (job_no, pdffile) values ] ;
+ [(']+m.lcJob_no + [',] + m.lcPDF + [)]

  * use sql passthrough to execute the command and return a code
  * indicating success or failure
  m.liSuccess = sqlexec(m.liHandle, m.lcStrInsert)

  * (more code here, but removed for clarity)
endscan
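
For the archives, the SQL this generates boils down to something like the
following (a sketch with a made-up path; LOAD_FILE() reads the file on the
server host, needs the FILE privilege, and returns NULL if the file is
missing, unreadable, or larger than max_allowed_packet):

INSERT INTO COMPLETE (job_no, pdffile)
VALUES ('1037021', LOAD_FILE('C:/pdfs/1037021.pdf'));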

Thanks,

Whil


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Load data infile fails to put entire PDF into one record

2005-11-09 Thread Gleb Paharenko
Hello.



Are you sure that you want to load PDF with LOAD DATA INFILE?

Maybe LOAD_FILE is what you want:

  http://dev.mysql.com/doc/refman/5.0/en/string-functions.html





Whil Hentzen wrote:

 Hi folks,

 

 I want to load a set of PDFs into a MySQL 5 ISAM table. I'm using the

 following command

 

 LOAD DATA INFILE '1037021.pdf' INTO TABLE complete FIELDS TERMINATED BY

 '%%EOF' (d_c)

 

 in the Query Browser 1.1.17, MySQL 5.0.15, running on W2K. Field d_c is

 defined as LONGTEXT.

 

 The following bad things happen:

 

 1. The PDF gets loaded into 553 separate rows, instead of just one record.

 2. I get an error in the Query Browser (bottom pane, under the Result

 Set pane) after completion:

 

 Row 42 was truncated; it contained more data than there were input

 columns (error 1262).

 

 3. If I change column d_c to be TEXT or BLOB, I get the same errant

 results.

 4. If I remove the FIELDS TERMINATED BY clause, I get the following

 error repeated 64 times between rows 48 and 176, where nn runs from 48

 to 176:

 

 Row nn was truncated; it contained more data than there were input

 columns (error 1262).

 

 I've done a search on the list archive through the beginning of the year

 and only found one set of three messages relating to LOAD DATA INFILE

 failing for loading PDFs (and Word and XML), but the author later solved

 it himself without telling us what he was doing wrong :(.

 

 Suggestions? Have I a big concept problem or just a little syntax one?

 

 Whil

 

 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Load data infile fails to put entire PDF into one record

2005-11-08 Thread Whil Hentzen

Hi folks,

I want to load a set of PDFs into a MySQL 5 ISAM table. I'm using the 
following command


LOAD DATA INFILE '1037021.pdf' INTO TABLE complete FIELDS TERMINATED BY 
'%%EOF' (d_c)


in the Query Browser 1.1.17, MySQL 5.0.15, running on W2K. Field d_c is 
defined as LONGTEXT.


The following bad things happen:

1. The PDF gets loaded into 553 separate rows, instead of just one record.
2. I get an error in the Query Browser (bottom pane, under the Result 
Set pane) after completion:


Row 42 was truncated; it contained more data than there were input 
columns (error 1262).


3. If I change column d_c to be TEXT or BLOB, I get the same errant results.
4. If I remove the FIELDS TERMINATED BY clause, I get the following 
error repeated 64 times between rows 48 and 176, where nn runs from 48 
to 176:


Row nn was truncated; it contained more data than there were input 
columns (error 1262).


I've done a search on the list archive through the beginning of the year 
and only found one set of three messages relating to LOAD DATA INFILE 
failing for loading PDFs (and Word and XML), but the author later solved 
it himself without telling us what he was doing wrong :(.


Suggestions? Have I a big concept problem or just a little syntax one?

Whil


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA INFILE and SET REPLACE

2005-11-02 Thread John thegimper
Why can't I get this to work? The name doesn't get replaced... :P

LOAD DATA
LOCAL
INFILE '/tmp/myfile.txt'
INTO TABLE cache
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(name)
SET name = REPLACE(name, 'Coke', 'Pepsi');



-
FREE E-MAIL IN 1 MINUTE!
 - [EMAIL PROTECTED] - http://www.pc.nu

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE and SET REPLACE

2005-11-02 Thread Paul DuBois

At 23:46 +0100 11/2/05, John thegimper wrote:

Why cant i get this to work? The name dont get replaced... :P

LOAD DATA
LOCAL
INFILE '/tmp/myfile.txt'
INTO TABLE cache
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(name)
SET name = REPLACE(name, 'Coke', 'Pepsi');


Please file a bug report with a repeatable test case at
http://bugs.mysql.com -- thanks.
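
In case it helps anyone searching the archives: the documented pattern on
MySQL 5.0.3 and up is to read the field into a user variable and derive the
column from it in the SET clause. A sketch (not a confirmed fix for the
report above):

LOAD DATA
LOCAL
INFILE '/tmp/myfile.txt'
INTO TABLE cache
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(@raw_name)
SET name = REPLACE(@raw_name, 'Coke', 'Pepsi');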

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: load data infile

2005-10-21 Thread Gleb Paharenko
Hello.



 my mysql version is 4.2.2



Are you sure that you have this version? I strongly recommend that you

use the latest release (4.1.15 now).



 The used command is not allowed with this MySQL version



Have a look here:

  http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html





'Yemi Obembe wrote:

 i'm having a problem with the load data infile command...i'm actually running 
 the script from a php script that goes this way: 

 ?php

 mysql_connect(localhost, $user, $pass);

 mysql_select_db(ng);

 $sql = load data local infile 'movies.txt' into table dir fields terminated 
 by '|';

 if($res = mysql_query($sql))

 echo loaded;

 else 

 echo mysql_error();

 ?

  

 the script and the txt file movies.txt reside on the same directory. runnign 
 the page, i get the error: The used command is not allowed with this MySQL 
 version 

 my mysql version is 4.2.2 

 what could be wrong? by the way, i'v tried changing the movies file path to a 
 full path but still doesnt change things.

 

 

 

 -

 

 A passion till tomorrow,

 Opeyemi Obembe | ng.clawz.com

 

 

 

 

 

   

 -

  Yahoo! Music Unlimited - Access over 1 million songs. Try it free.



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



load data infile

2005-10-20 Thread 'Yemi Obembe
I'm having a problem with the load data infile command... I'm actually running
the query from a php script that goes this way:
<?php
mysql_connect("localhost", $user, $pass);
mysql_select_db("ng");
$sql = "load data local infile 'movies.txt' into table dir fields terminated by '|'";
if($res = mysql_query($sql))
echo "loaded";
else
echo mysql_error();
?>
 
The script and the txt file movies.txt reside in the same directory. Running
the page, I get the error: "The used command is not allowed with this MySQL
version".
My mysql version is 4.2.2.
What could be wrong? By the way, I've tried changing the movies file path to a
full path but that still doesn't change things.



-

A passion till tomorrow,
Opeyemi Obembe | ng.clawz.com






-
 Yahoo! Music Unlimited - Access over 1 million songs. Try it free.


Re: load data infile

2005-10-20 Thread sheeri kritzer
Yemi,

For a first debugging step, have php print out the $sql variable to a
screen.  Then copy and paste that to a mysql client, and make sure you
get the same error.  That's good to debug any script -- make sure the
query that the script is doing can actually run.  Sometimes it's in
how php parses scripts and such.

-Sheeri

On 10/20/05, 'Yemi Obembe [EMAIL PROTECTED] wrote:
 i'm having a problem with the load data infile command...i'm actually running 
 the script from a php script that goes this way:
 ?php
 mysql_connect(localhost, $user, $pass);
 mysql_select_db(ng);
 $sql = load data local infile 'movies.txt' into table dir fields terminated 
 by '|';
 if($res = mysql_query($sql))
 echo loaded;
 else
 echo mysql_error();
 ?

 the script and the txt file movies.txt reside on the same directory. runnign 
 the page, i get the error: The used command is not allowed with this MySQL 
 version
 my mysql version is 4.2.2
 what could be wrong? by the way, i'v tried changing the movies file path to a 
 full path but still doesnt change things.



 -

 A passion till tomorrow,
 Opeyemi Obembe | ng.clawz.com






 -
  Yahoo! Music Unlimited - Access over 1 million songs. Try it free.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: modifying duplicate unique keys with LOAD DATA INFILE

2005-10-06 Thread Gerhard Prilmeier

Dear Michael,

thank you for your response.


If a duplicate unique key is found when importing with LOAD DATA INFILE,



How does that happen?  I take it you are adding the imported data into an
already populated table.


True, that's what I meant to say.


Don't alter the keys for the existing data!


I see, I should not do that.


Is it the case that the imported data is simply a set of new rows with no
references to it?  If so, there's no reason to preserve the old key for 
any

of the imported rows.


Unfortunately, this is not the case. The data in the key has a meaning: it
represents a 64-bit µs timestamp. Is this bad design? Would you not do
something like this? One solution would be to use an AUTO_INCREMENT key
for identification (I can afford those extra bytes; I just thought I
wouldn't need them), make the timestamp non-unique, and do as you
described.
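
A minimal sketch of that layout (hypothetical table and column names;
BIGINT UNSIGNED comfortably holds a 64-bit microsecond timestamp):

CREATE TABLE samples (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- surrogate key used for identification
  ts_us  BIGINT UNSIGNED NOT NULL,              -- 64-bit microsecond timestamp, no longer unique
  val    DOUBLE,
  PRIMARY KEY (id),
  KEY idx_ts (ts_us)                            -- non-unique index for time-based lookups
);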

If you already have the exported data and don't want to start over, you 
can

probably accomplish the same thing with a temporary table.


I don't have the data, I can freely change whatever has to be changed. In
this case I shouldn't go for the temporary table, right? Still, thanks for
your code, I learned from that, too.

Best regards,
Gerhard Prilmeier


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



modifying duplicate unique keys with LOAD DATA INFILE

2005-10-03 Thread Gerhard Prilmeier

Hello,

I use tables with one primary key (which is the only unique key). I'd like 
to export data from such a table to a file, and then import it on another 
machine.
If a duplicate unique key is found when importing with LOAD DATA INFILE, 
MySQL gives me the choice of whether to

1. stop execution with an error
2. not import rows with duplicate unique keys (using IGNORE)

What I'd like to do is to alter the unique key (either the imported or the 
existing one) to a value that does not already exist, and then import the 
row.


Do I have to fall back on a bunch of INSERT statements to accomplish this, 
or do you see a way to get there with LOAD DATA INFILE?


Thank you very much!
Gerhard Prilmeier 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: modifying duplicate unique keys with LOAD DATA INFILE

2005-10-03 Thread Michael Stassen

Gerhard Prilmeier wrote:

Hello,

I use tables with one primary key (which is the only unique key). I'd 
like to export data from such a table to a file, and then import it on 
another machine.
If a duplicate unique key is found when importing with LOAD DATA INFILE, 


How does that happen?  I take it you are adding the imported data into an 
already populated table.



MySQL gives me the choice of whether to
1. stop execution with an error
2. not import rows with duplicate unique keys (using IGNORE)


It can also replace the existing rows, but that's not what you want.

What I'd like to do is to alter the unique key (either the imported or 
the existing one) to a value that does not already exist, and then 
import the row.


Don't alter the keys for the existing data!  That path leads to trouble. 
Usually, other tables will refer to rows in this table by key.  Changing 
keys breaks relationships.


Is it the case that the imported data is simply a set of new rows with no 
references to it?  If so, there's no reason to preserve the old key for any 
of the imported rows.  Instead, we just assign new keys to all the imported 
rows.  This should be relatively easy if the primary key on the destination 
table is AUTO_INCREMENT.  In that case, the simplest solution would be to 
not export the keys in the first place.  Then new keys will be assigned 
automatically when you leave out the key column during the import into the 
destination table.  Something like


  SELECT col1, col2, ...
  INTO OUTFILE '/tmp/export.txt'
  FROM export_table;

  LOAD DATA INFILE '/tmp/export.txt'
  INTO TABLE dest_table
  (col1, col2, ...);

where col1, col2, ... is all the columns except the key, or

  SELECT NULL, col1, col2, ...
  INTO OUTFILE '/tmp/export.txt'
  FROM export_table;

  LOAD DATA INFILE '/tmp/export.txt'
  INTO TABLE dest_table;

where NULL is in the position of the key column.

Do I have to fall back on a bunch of INSERT statements to accomplish 
this, or do you see a way to get there with LOAD DATA INFILE?


If you already have the exported data and don't want to start over, you can 
probably accomplish the same thing with a temporary table.  Something like


  # make a temporary table to match dest_table
  CREATE TEMPORARY TABLE expdata SELECT * FROM dest_table WHERE 0;

  # change the temp table to allow NULLs in the key column
  ALTER TABLE expdata CHANGE id id INT;

  # import the data into the temp table
  LOAD DATA INFILE '/tmp/export.txt'
  INTO TABLE expdata;

  # change the key column to all NULLs
  UPDATE expdata SET id = NULL;

  # copy the temp table rows into dest_table, where new auto_inc
  # keys will replace the NULLs in the imported key column
  INSERT INTO dest_table SELECT * FROM expdata;

  # clean up
  DROP TABLE expdata;


Thank you very much!
Gerhard Prilmeier


If this isn't what you need, I think we'll need more details about your 
tables and what you are trying to accomplish.


Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA INFILE Syntax Error

2005-09-26 Thread Jason Ferguson
I am attempting to import a large file with data in this format:

1923158|GA|1996 Olympic Yachting
Cauldron|park|Chatham|13|051 |320446N|0810502W|32.07944|-
81.08389Savannah

With this command:
LOAD DATA LOCAL INFILE 'C:\\PHP\\FL_DECI.txt'
INTO TABLE locations2
FIELDS TERMINATED BY '|'
LINES TERMINATED BY \r
(@d,@d,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d);

The @d is a dummy variable for information I do not need to import. The
table structure looks like this:

+-+-+--+-+-++
| Field | Type | Null | Key | Default | Extra |
+-+-+--+-+-++
| ID | int(11) | | PRI | NULL | auto_increment |
| STATE | int(11) | YES | | NULL | |
| LOCNAME | varchar(25) | YES | | NULL | |
| LOCTYPE | varchar(10) | YES | | NULL | |
| COUNTY | int(11) | YES | | NULL | |
| CTRLAT | float | YES | | NULL | |
| CTRLNG | float | YES | | NULL | |
+-+-+--+-+-++

And here is the error I get:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that
corresponds to your MySQL server version for the right syntax to use near
'@d,@d
,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d)' at
line
5

This error is driving me nuts! Any help would be appreciated, as this is a
rather large file.

Jason


Re: LOAD DATA INFILE Syntax Error

2005-09-26 Thread Robert L Cochran

http://mirror.tomato.it/mysql/doc/mysql/en/load-data.html

Bob Cochran


Jason Ferguson wrote:


I am attempting to import a large file with data in this format:

1923158|GA|1996 Olympic Yachting
Cauldron|park|Chatham|13|051 |320446N|0810502W|32.07944|-
81.08389Savannah

With this command:
LOAD DATA LOCAL INFILE 'C:\\PHP\\FL_DECI.txt'
INTO TABLE locations2
FIELDS TERMINATED BY '|'
LINES TERMINATED BY \r
(@d,@d,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d);

The @d is a dummy variable for information I do not need to import. The
table structure looks like this:

+-+-+--+-+-++
| Field | Type | Null | Key | Default | Extra |
+-+-+--+-+-++
| ID | int(11) | | PRI | NULL | auto_increment |
| STATE | int(11) | YES | | NULL | |
| LOCNAME | varchar(25) | YES | | NULL | |
| LOCTYPE | varchar(10) | YES | | NULL | |
| COUNTY | int(11) | YES | | NULL | |
| CTRLAT | float | YES | | NULL | |
| CTRLNG | float | YES | | NULL | |
+-+-+--+-+-++

And here is the error I get:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that
corresponds to your MySQL server version for the right syntax to use near
'@d,@d
,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d)' at
line
5

This error is driving me nuts! Any help would be appreciated, as this is a
rather large file.

Jason

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE Syntax Error

2005-09-26 Thread Jasper Bryant-Greene

Robert L Cochran wrote:

http://mirror.tomato.it/mysql/doc/mysql/en/load-data.html


There's a lot to read there for one small paragraph, so from the above link:

Before MySQL 5.0.3, the column list must contain only names of columns 
in the table being loaded, and the SET  clause is not supported. As of 
MySQL 5.0.3, the column list can contain either column names or user 
variables, and the SET clause is supported. This enables you to assign 
input values to user variables, and then perform transformations on 
those values before assigning the result to columns.


Bob: please don't top-post.

Jasper


Jason Ferguson wrote:


I am attempting to import a large file with data in this format:

1923158|GA|1996 Olympic Yachting
Cauldron|park|Chatham|13|051 |320446N|0810502W|32.07944|-
81.08389Savannah

With this command:
LOAD DATA LOCAL INFILE 'C:\\PHP\\FL_DECI.txt'
INTO TABLE locations2
FIELDS TERMINATED BY '|'
LINES TERMINATED BY \r
(@d,@d,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d);

The @d is a dummy variable for information I do not need to import. The
table structure looks like this:

+-+-+--+-+-++
| Field | Type | Null | Key | Default | Extra |
+-+-+--+-+-++
| ID | int(11) | | PRI | NULL | auto_increment |
| STATE | int(11) | YES | | NULL | |
| LOCNAME | varchar(25) | YES | | NULL | |
| LOCTYPE | varchar(10) | YES | | NULL | |
| COUNTY | int(11) | YES | | NULL | |
| CTRLAT | float | YES | | NULL | |
| CTRLNG | float | YES | | NULL | |
+-+-+--+-+-++

And here is the error I get:

ERROR 1064 (42000): You have an error in your SQL syntax; check the 
manual

that
corresponds to your MySQL server version for the right syntax to use near
'@d,@d
,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d)' 
at

line
5

This error is driving me nuts! Any help would be appreciated, as this 
is a

rather large file.

Jason

 






--
Jasper Bryant-Greene
Freelance web developer
http://jasper.bryant-greene.name/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE Syntax Error

2005-09-26 Thread Jason Ferguson
I've been over that page before posting, with no luck. It might be an
obvious error in syntax, but I can't figure it out.

Jason

On 9/26/05, Robert L Cochran [EMAIL PROTECTED] wrote:

 http://mirror.tomato.it/mysql/doc/mysql/en/load-data.html

 Bob Cochran


 Jason Ferguson wrote:

 I am attempting to import a large file with data in this format:
 
 1923158|GA|1996 Olympic Yachting
 Cauldron|park|Chatham|13|051 |320446N|0810502W|32.07944|-
 81.08389Savannah
 
 With this command:
 LOAD DATA LOCAL INFILE 'C:\\PHP\\FL_DECI.txt'
 INTO TABLE locations2
 FIELDS TERMINATED BY '|'
 LINES TERMINATED BY \r

 (@d,@d,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d);
 
 The @d is a dummy variable for information I do not need to import. The
 table structure looks like this:
 
 +-+-+--+-+-++
 | Field | Type | Null | Key | Default | Extra |
 +-+-+--+-+-++
 | ID | int(11) | | PRI | NULL | auto_increment |
 | STATE | int(11) | YES | | NULL | |
 | LOCNAME | varchar(25) | YES | | NULL | |
 | LOCTYPE | varchar(10) | YES | | NULL | |
 | COUNTY | int(11) | YES | | NULL | |
 | CTRLAT | float | YES | | NULL | |
 | CTRLNG | float | YES | | NULL | |
 +-+-+--+-+-++
 
 And here is the error I get:
 
 ERROR 1064 (42000): You have an error in your SQL syntax; check the
 manual
 that
 corresponds to your MySQL server version for the right syntax to use near
 '@d,@d
 ,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d)'
 at
 line
 5
 
 This error is driving me nuts! Any help would be appreciated, as this is
 a
 rather large file.
 
 Jason
 
 
 




Re: LOAD DATA INFILE Syntax Error

2005-09-26 Thread Jason Ferguson
Okay, now I get it. I was using the 4.1 series. Looks like an upgrade is in
order.

Jason

On 9/26/05, Jasper Bryant-Greene [EMAIL PROTECTED] wrote:

 Robert L Cochran wrote:
  http://mirror.tomato.it/mysql/doc/mysql/en/load-data.html

 There's a lot to read there for one small paragraph, so from the above
 link:

 Before MySQL 5.0.3, the column list must contain only names of columns
 in the table being loaded, and the SET clause is not supported. As of
 MySQL 5.0.3, the column list can contain either column names or user
 variables, and the SET clause is supported. This enables you to assign
 input values to user variables, and then perform transformations on
 those values before assigning the result to columns.

 Bob: please don't top-post.

 Jasper

  Jason Ferguson wrote:
 
  I am attempting to import a large file with data in this format:
 
  1923158|GA|1996 Olympic Yachting
  Cauldron|park|Chatham|13|051 |320446N|0810502W|32.07944|-
  81.08389Savannah
 
  With this command:
  LOAD DATA LOCAL INFILE 'C:\\PHP\\FL_DECI.txt'
  INTO TABLE locations2
  FIELDS TERMINATED BY '|'
  LINES TERMINATED BY \r
 
 (@d,@d,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d);
 
  The @d is a dummy variable for information I do not need to import. The
  table structure looks like this:
 
  +-+-+--+-+-++
  | Field | Type | Null | Key | Default | Extra |
  +-+-+--+-+-++
  | ID | int(11) | | PRI | NULL | auto_increment |
  | STATE | int(11) | YES | | NULL | |
  | LOCNAME | varchar(25) | YES | | NULL | |
  | LOCTYPE | varchar(10) | YES | | NULL | |
  | COUNTY | int(11) | YES | | NULL | |
  | CTRLAT | float | YES | | NULL | |
  | CTRLNG | float | YES | | NULL | |
  +-+-+--+-+-++
 
  And here is the error I get:
 
  ERROR 1064 (42000): You have an error in your SQL syntax; check the
  manual
  that
  corresponds to your MySQL server version for the right syntax to use
 near
  '@d,@d
 
 ,LOCNAME,LOCTYPE,@d,STATE,COUNTY,@d,@d,LNG,LAT,@d,@d,@d,@d,@d,@d,@d,@d)'
  at
  line
  5
 
  This error is driving me nuts! Any help would be appreciated, as this
  is a
  rather large file.
 
  Jason
 
 
 
 
 

 --
 Jasper Bryant-Greene
 Freelance web developer
 http://jasper.bryant-greene.name/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]




Loading Decimal Values with load data infile

2005-09-14 Thread Thorsten Moeller
Hi,

I am trying to load a csv file with LOAD DATA INFILE. The file
contains columns with decimal values that use the German comma instead of
the dot as separator (e.g. 3.20 is written as 3,20). Is there an option to
handle this during or immediately before the LOAD with MySQL commands/tools?
At the moment the digits to the right of the comma are simply cut off.

I will be glad for any advice!

Thorsten
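
One way to handle the conversion inside LOAD DATA itself is the
user-variable/SET pattern (a sketch only, needing MySQL 5.0.3 or later;
the file, table and column names and the ';' separator are invented):

LOAD DATA INFILE '/tmp/preise.csv'
INTO TABLE prices
FIELDS TERMINATED BY ';'
(item, @amount_raw)
SET amount = REPLACE(@amount_raw, ',', '.');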



-- 
http://www.fastmail.fm - The way an email service should be


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





