That's not necessarily a bad idea. One example of why is a real-world
situation at work where I receive a "balances" file that is fixed-field. The
company code and negative-balance indicators are wrong, and the file numbers
need to be converted based on a junction table I have in the SQL database the
data is being converted into. In this case, I loaded the 14,000+ line file
entirely into an array, processed it that way, and wrote the output out to
another file. It is exactly what I need, and it will not kill the machine
later because I am only doing this during the database conversion - but the
file went through several rounds of processing before the final conversion.
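
Roughly, the slurp-and-rewrite pattern looked something like this - the
filenames, column positions, and substitutions below are just placeholders,
not the real layout:

open(BAL, "balances.txt")        or die "Can't read balances.txt: $!";
my @lines = <BAL>;               # slurp the whole file - fine at ~14,000 lines
close BAL;

open(OUT, ">balances_fixed.txt") or die "Can't write output: $!";
foreach my $line (@lines) {
    # hypothetical fixes: company code in columns 1-3,
    # negative-balance flag in column 20
    substr($line, 0, 3)  = "ABC";
    substr($line, 19, 1) = "-" if substr($line, 19, 1) eq "N";
    print OUT $line;
}
close OUT;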

However, the next conversion had files that were over 4 GB. In that case I
used DBD::CSV, or processed a few hundred lines at a time (which is still
significantly slower).
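
A sketch of the batch approach, again with made-up filenames and a trivial
stand-in for the real per-batch work:

sub process_batch {
    my ($lines) = @_;
    # the real code would fix fields against the junction table here;
    # this placeholder just writes the lines back out
    print OUT @$lines;
}

open(BIG, "big_export.txt") or die "Can't read big_export.txt: $!";
open(OUT, ">big_fixed.txt") or die "Can't write output: $!";

my @batch;
while (<BIG>) {
    push @batch, $_;
    if (@batch >= 500) {              # a few hundred lines at a time
        process_batch(\@batch);
        @batch = ();
    }
}
process_batch(\@batch) if @batch;     # whatever is left over
close BIG;
close OUT;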

In this situation, as in most, it depends on what you need. Your point is
well taken, but sometimes the best way is to load the entire file into an
array.

Steve Howard

-----Original Message-----
From: Evgeny Goldin (aka Genie) [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 12, 2001 7:03 PM
To: [EMAIL PROTECTED]
Cc: Stephen Henderson
Subject: Re: use of split command



> I am trying to read a quite large file (ca 13000 lines) directly into an
> array (for speed)

Sorry, it's a bad idea.
One day your file will be 1 GB in size, and @ets=<ACCS> will kill your PC
trying to load the whole gigabyte into memory ..

while ( <HANDLE> ){..}

is the best way for reading large files, I think.
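
For example, something like this (the filename and split pattern are only
placeholders):

open(ACCS, "yourfile.txt") or die "Can't open file: $!";
while (<ACCS>) {
    chomp;
    my @fields = split /\t/;    # only one line is in memory at a time
    # ... do whatever you need with @fields ...
}
close ACCS;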
