Thanks Peter, that will help.
I am wondering if the use of an anonymous hash and arrays
may make more sense here than doing a bunch of `grep`s.
Can somebody show me how I would use them here?
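Something like this is what I was picturing, but I am not sure it is
right. (A rough sketch only: the $path value, the /^PAGE/ and /^HEADER/
patterns, and the %sections keys are made-up placeholders, not the real
ones from my program.)

#!/usr/bin/perl
use strict;
use warnings;

# One pass over the files, collecting lines into a hash of
# anonymous arrays instead of grepping the whole list repeatedly.
my $path = 'C:/data';             # placeholder path
my %sections;                     # section name => anonymous array of lines

for my $file ( glob("$path/*") ) {           # glob once, up front
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        # push autovivifies the anonymous array for each key
        if    ( $line =~ /^PAGE/   ) { push @{ $sections{pages}   }, $line }
        elsif ( $line =~ /^HEADER/ ) { push @{ $sections{headers} }, $line }
        else                         { push @{ $sections{other}   }, $line }
    }
    close $fh;
}

# Later work uses the in-memory arrays, not another read of the files.
printf "%d page lines, %d header lines\n",
    scalar @{ $sections{pages}   || [] },
    scalar @{ $sections{headers} || [] };

The idea would be that each key of %sections holds an anonymous array of
the matching lines, so everything is collected in a single read. Is that
the right way to go about it?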

--- Peter Eisengrein <[EMAIL PROTECTED]> wrote:

> > 
> > I have this program below that works correctly, but the
> > performance is slow if the files are big. How can I
> > write a program to do this instead of the one I wrote?
> > 
> 
> Without getting into too much detail, there are
> several things that can help; all are related to the
> overall architecture of the code. Generally speaking,
> you do the same thing over and over. For instance,
> you glob "$path/*" for each subroutine. Assuming the
> contents of $path do not change, do this once and
> put the results in an array.
> 
> But, more importantly, you re-read all your files
> for each subroutine ("while(<SCH>)"). Instead of
> building your subroutines around reading for each
> section/page/whatever, try reading each file only
> once and doing each of those functions as you go
> instead of repeating it for each task. I think this
> is where you are losing most of your performance.
> 

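If I understand Peter's suggestion, the restructuring would look roughly
like the sketch below. The handler names and patterns are stand-ins for
whatever my real subroutines do; the point is just that the glob happens
once and each file is read only once, with every task done per line.

#!/usr/bin/perl
use strict;
use warnings;

my $path  = 'C:/data';            # placeholder
my @files = glob("$path/*");      # glob once and reuse the list

for my $file (@files) {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while ( my $line = <$fh> ) {  # each file is read exactly once
        chomp $line;
        # Hand the current line to every task as we go, instead of
        # having each subroutine re-open and re-read the file.
        handle_pages($line);
        handle_totals($line);
    }
    close $fh;
}

# Stand-in handlers; each inspects one line and acts only if it matches.
sub handle_pages  { my ($line) = @_; print "PAGE:  $line\n" if $line =~ /^PAGE/  }
sub handle_totals { my ($line) = @_; print "TOTAL: $line\n" if $line =~ /^TOTAL/ }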
