"West, William M" wrote:

> by slurping the WHOLE file into memory at once (do you have room to put a 45
> megabyte file into memory?)  the speed of the processing went up-

Highly unlikely.  There is no real speed advantage in Perl to slurping a
file.  The only good reason I can think of to slurp is when all of the data in
the file must be cross-related in some way, and the file data is already stored
in a very condensed format.

> ...

> pps- I just reread the question and realized that he was interested in
>         reading the file in faster!!  ok::
>
> undef$/;        #kills the 'line delimiter'-> maybe "local $/= undef;" in
>                 #a subroutine is safer

Nope.  Reading the file line by line is much more likely to speed up the
process.  Simplifying your regexes will also help, even if it means running
more regular expressions against each line.  Regular expressions with fewer
variable elements will run faster, because the engine has fewer decisions to
make.
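As a sketch of that advice (the filename and patterns here are placeholders,
assumed only for illustration), two cheap matches per line are often faster
than one complicated one, and memory use stays constant no matter how big the
file is:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Process the file one line at a time; each regex only has to examine
    # a single short string rather than the whole 45 MB buffer.
    open my $fh, '<', 'big_file.txt' or die "Cannot open big_file.txt: $!";
    my $count = 0;
    while (my $line = <$fh>) {
        next unless $line =~ /^ERROR/;     # cheap anchored filter first
        $count++ if $line =~ /timeout/;    # then a second simple match
    }
    close $fh;
    print "Matched $count lines\n";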

Joseph



