> I have one big text file, and a set of strings
> to be replaced by another set of strings.
> Currently, I am reading each line of the file
> and replacing one set of strings with the
> other, one after another. Is there a more
> efficient way of doing this? The data is so
> huge that this job takes around 7 hours of
> total time on a Sun Ultra 10.
What is the size of the input file? Gigabytes?
I presume you are following the usual suggestion
of reading from one file and writing the
modified lines to another - that much seems
obvious. Seven hours, however, seems a rather
long time.
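
A minimal sketch of that read-one-file,
write-another pattern, in case it is useful (the
filenames here are made up):

    open my $in,  '<', 'big.txt'     or die "big.txt: $!";
    open my $out, '>', 'big.txt.new' or die "big.txt.new: $!";
    while (my $line = <$in>) {
        # ... apply the substitutions to $line here ...
        print $out $line;
    }
    close $out or die "big.txt.new: $!";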
May I ask whether those sets of strings change
from one execution to another? E.g.
    my $word = "WORD";
    while (<>) {
        s/$word/lie/g;    # slow: pattern comes from a variable
    }
is slow because the interpolated $word forces the
regex to be recompiled on every pass through the
loop. The best way to solve this problem is to
build a Perl script on the fly and run it using
eval $script;
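
Something like this, where %replace (old string
=> new string) is a made-up name, and I assume
the replacement strings contain no slashes:

    my %replace = ( WORD => 'lie', FOO => 'bar' );

    # Build the loop as source text: every pattern becomes a
    # literal in the generated code, so perl compiles each
    # regex exactly once instead of once per line.
    my $script = "while (<>) {\n";
    $script   .= "    s/\Q$_\E/$replace{$_}/g;\n" for sort keys %replace;
    $script   .= "    print;\n";
    $script   .= "}\n";

    eval $script;        # compile and run the generated loop
    die $@ if $@;        # report any error in the generated code

You would then run it as, say:

    perl fixup.pl big.txt > big.txt.new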
I hope this helps,
Jonathan Paton