James Edward Gray II wrote:
>
> On Jan 2, 2004, at 10:46 AM, Ed Christian wrote:
>
> > James Edward Gray II wrote:
> >> On Jan 2, 2004, at 10:10 AM, Paul Kraus wrote:
> >>
> >>>> Don't do that.  The foreach reads the whole file into memory and
> >>>> then walks it line by line.  If we just want one at a time, let's
> >>>> fetch them that way.
> >>>
> >>> I don't agree with this.
> >>
> >> And I don't understand this.  ;)
> >>
> >>> If you are working with a file as small as the one included, it
> >>> seems easier to just load up some data structures. I understand
> >>> writing efficient code is always best, but for a quickie script
> >>> like this I wouldn't be too concerned with the 1/100th of a ms
> >>> you're going to shave off by not dumping the contents into memory.
> >>
> >> If you're going to work with the lines one at a time anyway, what
> >> exactly is the advantage of using the probably slower and definitely
> >> more memory-wasteful foreach()?
> >
> > While the foreach is certainly more wasteful, why in the world would
> > you re-initialize and re-open the file multiple times? Why not just
> > open the file once and iterate over it, comparing each line to each
> > of the keys in the %input hash?
>
> Your way is certainly better, if output order doesn't matter. The
> original code checked the first key against the file, then the second,
> then the third. Your suggestion could find the sixth index before it
> finds the first. If order doesn't matter, though, I agree that your
> suggestion would be boatloads more efficient.

Except that there's no need to reopen the file. A simple

  seek DATA, 0, 0

will do the trick with very little overhead. (One caveat: if DATA is
the special __DATA__ handle rather than a handle you opened yourself,
record its position with tell DATA before the first read and seek back
to that instead, since offset 0 on __DATA__ is the top of the script
file itself.)
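For instance, with the __DATA__ handle it might look like this (a
minimal sketch; the %input hash, the numeric keys, and the sample data
are all made up here, since the original script isn't quoted):

  #!/usr/bin/perl
  use strict;
  use warnings;

  my %input = ( 1 => 'foo', 2 => 'bar', 3 => 'baz' );   # hypothetical keys

  # Remember where __DATA__ starts so we can rewind to it; seeking
  # straight to offset 0 would land at the top of the script file.
  my $data_start = tell DATA;

  for my $key ( sort { $a <=> $b } keys %input ) {
      seek DATA, $data_start, 0;    # rewind instead of reopening
      while ( my $line = <DATA> ) {
          chomp $line;
          print "$key: $line\n" if $line =~ /\Q$input{$key}\E/;
      }
  }

  __DATA__
  some bar here
  a line with foo
  baz at the end

That keeps James's key-by-key output order and still only opens the
file once.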
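For comparison, Ed's one-pass idea looks roughly like this (same
made-up data; note that while (<DATA>) fetches one line at a time,
where foreach (<DATA>) would pull the whole file into a list first):

  #!/usr/bin/perl
  use strict;
  use warnings;

  my %input = ( 1 => 'foo', 2 => 'bar', 3 => 'baz' );   # hypothetical keys

  # One pass: each line is read once and tested against every key.
  # Matches come out in file order, not key order.
  while ( my $line = <DATA> ) {
      chomp $line;
      for my $key ( sort { $a <=> $b } keys %input ) {
          print "$key: $line\n" if $line =~ /\Q$input{$key}\E/;
      }
  }

  __DATA__
  some bar here
  a line with foo
  baz at the end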

Rob


