>
>Price, Jason wrote:
>> I'm trying to optimize a script used for processing large text log
>> files (around 45MB).  I think I've got all the processing fairly well
>> optimized, but I'm wondering if there's anything I can do to speed up
>> the initial loading of the file.

oh the pain and suffering of regular expressions!!  oh the forests of
slashes, both forward and backwards!!  but reading these things is only the
start of the pain!!  

imagine looking for something like m/BOB\d*\wyoyo/g # but what if that line
had SEVERAL places where a number was followed by a letter? that greedy
little '*' would take your processor on a roller coaster ride of
backtracking!!    pain.  :P  well, i've been learning and asking about that
too ->  by slurping the WHOLE file into memory at once (do you have room to
put a 45 megabyte file into memory?) the processing sped up - but lots and
LOTS of time has been saved (still working on it though :)  ) by doing
this::    m/BOB\d*?\wyoyo/g #  the '?' makes '*' non-greedy, so it matches
as little as possible instead of grabbing everything and backtracking -
lots of processor time saved!!
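
here's a tiny sketch of the difference (the test string is made up) - run
it and you can see exactly how much each quantifier grabs:

use strict;
use warnings;

my $line = 'BOB12a34byoyo';

# greedy: \d* grabs every digit it can before letting \w have a character
if ($line =~ /BOB(\d*)(\w)/) {
    print "greedy \\d*  took '$1', \\w took '$2'\n";   # '12' and 'a'
}

# non-greedy: \d*? starts with nothing and only grows if the match fails
if ($line =~ /BOB(\d*?)(\w)/) {
    print "lazy   \\d*? took '$1', \\w took '$2'\n";   # '' and '1'
}

(whether lazy is actually faster depends on your data - it just changes
where the backtracking happens)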

but even better than that??

perldoc -q regex
perldoc -q optimize

or some other string to search for -  also www.perldoc.com -> type perlre in
its search box.


willy

ps -  regular expressions are neat, if brain-damaging...

pps - i just reread the question and realized that he was interested in
        getting the file read in faster!!  ok::

my $foo;                # will end up holding the whole file

{
    local $/;           # kills the 'line delimiter' ($/) for this block only -
                        # safer than a bare undef $/;, which changes it for
                        # the rest of the program

    open my $fh, '<', 'filetoparse' or die "can't open filetoparse: $!";
    $foo = <$fh>;       # one variable holding 45 megabytes of stuff! that's
                        # fine - $foo is a regular ol' scalar, and a scalar
                        # can hold as much as your memory allows
    close $fh;
}

while ($foo =~ /$regular_expression_pattern/g) {   # /g makes each pass pick
    do_stuff_with($1, $2, $3, $4);                 # up where the last match
}                                                  # left off
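
if you want to see what slurping actually buys you on your own disk, the
core Benchmark module can time the two reading styles side by side - a
minimal sketch (the filename 'biglog.txt' is just a stand-in for the real
log):

use strict;
use warnings;
use Benchmark qw(timethese);

my $file = 'biglog.txt';        # stand-in name - point it at your 45 MB log

timethese(10, {
    slurp => sub {              # read the whole file in one gulp
        local $/;
        open my $fh, '<', $file or die "open $file: $!";
        my $data = <$fh>;
    },
    lines => sub {              # read it line by line, the usual way
        open my $fh, '<', $file or die "open $file: $!";
        while (my $line = <$fh>) { }    # just reading, no processing
    },
});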

# have fun!!
