thanks for the reply!  I am not too concerned about cutting out the columns,
as I may need the other information later.
I was just wondering: does it make a difference if both of the columns that
I am interested in have entries that are NOT unique?
Also, at what point does one need a parser or to use a hash?
thanks!

Eric Bergen <[EMAIL PROTECTED]> wrote:
awk is probably the best tool I can think of for cutting columns out
of a text file. Something like
awk -F '\t' '{ print $2 "," $3 }' my_file
could be used to pick the second and third columns out of a file prior
to importing it.
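
To finish the job you could feed the result straight to mysqlimport. This
is just a rough, untested sketch (my_db and my_table are placeholder names;
mysqlimport derives the table name from the file name, so the file has to
be named after the table):

# Cut columns 2 and 3 into a file named after the target table.
awk -F '\t' '{ print $2 "," $3 }' my_file > my_table.txt
# Bulk-load it; mysqlimport maps my_table.txt to the table my_table.
mysqlimport --local --fields-terminated-by=',' my_db my_table.txt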

-Eric

On 4/18/05, newbie c wrote:
> Hi,
> 
> I am about to create a database and there are a number of files that I need
> to load into the database. They are tab delimited files. One of the files 
> contains about 4 or 5 columns. I am only interested in the second and the 
> third column right now but I will load the whole table. The values in the 
> second column can occur more than once in the file.
> The values in the third column can also occur more than once in the file.
> 
> Another file that I want to load as a table into the database only contains two
> columns; one column will be unique while the second column will have
> duplicate values in the file.
> 
> My question is: when should I use mysqlimport or LOAD DATA, and when should I
> write my own Perl parser to help load the table?
> What criteria would one use to decide whether to read a file into a hash?
> 
> Also, if I decide to use mysqlimport is there anything I should watch out for?
> 
> thanks!
> 


-- 
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net


