James Edward Gray II wrote:
> On Jan 2, 2004, at 10:10 AM, Paul Kraus wrote:
> 
>>> Don't do that.  The foreach reads the whole file into memory and
>>> then walks it line by line.  If we just want one at a time, let's
>>> fetch them that way
>> 
>> I don't agree with this.
> 
> And I don't understand this.  ;)
> 
>> If you are working with a file as small as the one included, it
>> seems easier to just load up some data structures. I understand
>> writing efficient code is always best, but for a quickie script
>> like this I wouldn't be too concerned with the 1/100th of a ms
>> you're going to shave off by not dumping the contents to memory.
> 
> If you're going to work with the lines one at a time anyway, what
> exactly is the advantage of using the probably slower and definitely
> more memory wasteful foreach()?
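
To make the memory point concrete, here's a rough sketch of the two read
styles (just an illustration with made-up data lines, not anyone's actual
script; the two loops are alternatives, and the second prints nothing here
only because the first has already exhausted DATA):

#!/usr/bin/perl
use strict;
use warnings;

# foreach evaluates <DATA> in list context: every remaining line is
# read into a temporary list in memory before the loop body ever runs.
foreach my $line (<DATA>) {
    print "foreach: $line";
}

# while calls <DATA> once per iteration: only the current line is held
# in memory.  (Prints nothing here, since the foreach above has already
# read DATA to the end.)
while (defined(my $line = <DATA>)) {
    print "while: $line";
}

__DATA__
probe_1   GENE_A   made-up annotation
probe_2   GENE_B   made-up annotation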

While the foreach is certainly more wasteful, why in the world would you
re-initialize and re-open the file multiple times? Why not just open the
file once and iterate over it, comparing each line to each of the keys
in the %genedex hash?

# Original (rewinds/re-reads DATA once per gene)
#foreach $gene (sort keys %genedex) {
#  while ($line=<DATA>) {

# My idea: read the file once, checking each line against every gene
while (defined(my $line = <DATA>)) {
  foreach my $gene (sort keys %genedex) {
    if ($line =~ /$gene/) {
      # first whitespace-separated field is the probe id
      my ($probe_id) = split(/\s/, $line, 2);
      print "$gene\t$probe_id\t$genedex{$gene}\n";
    }
  }
}

I'm sure there are other ways to improve on this idea, but right off the
bat I can't see justifying repeatedly heading back to disk to
re-open/rewind the file...
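
One further tweak I'd consider, sketched below (it reuses %genedex and DATA
from the code above, and it assumes you only need the first matching gene
per line, unlike the nested foreach, which can print several): join the keys
into a single quotemeta'd alternation so each line is scanned with one regex
instead of once per key.

# Build one pattern from all the keys up front; quotemeta keeps any
# regex metacharacters in the gene names from being interpreted.
my $pattern = join '|', map { quotemeta } sort keys %genedex;
my $gene_re = qr/($pattern)/;

while (defined(my $line = <DATA>)) {
    next unless $line =~ $gene_re;
    my $gene = $1;                          # first key that matched this line
    my ($probe_id) = split /\s/, $line, 2;  # first whitespace-separated field
    print "$gene\t$probe_id\t$genedex{$gene}\n";
}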
