siegfried wrote:
> Thanks, but if I am piping from stdin to stdout I see two problems:
>
> (1) how do I implement the -n flags that tell me the line number and
> file name where the matches are

Well, as long as you're only piping one file at a time, the line
number part isn't a problem; but I see your point otherwise.  OK;
forget the suggestion about a separate filter.

> (2) how do I make two passes: one to strip out the comments (and
> preserve the original line breaks so I don't screw up the line
> numbers) and the other to actually search for what I am looking for?

If you slurp the entire file into a single string, you need to make
two passes as you describe.  Additionally, this makes the regexes used
to purge comments even messier, since you need to modify them to
preserve newlines within the comments.
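For concreteness, here's one way that trick can look, as a minimal sketch assuming C-style /* ... */ comments (your language's comment syntax may differ): replace each comment with only the newlines it contained, so later line counts stay accurate.

```perl
use strict;
use warnings;

my $source = "a /* one\ntwo */ b\n";

# Replace each /* ... */ comment with just its embedded newlines.
(my $stripped = $source) =~ s{/\* .*? \*/}{
    (my $nl = $&) =~ tr/\n//cd;   # delete everything except newlines
    $nl;
}gesx;

# $stripped is now "a \n b\n" -- same number of lines as $source
```

The /s lets . cross line breaks, /e runs the replacement as code, and tr/\n//cd deletes every character in the match that isn't a newline.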

If you use the general approach that I outlined, where you're tackling
each file on a line-by-line basis, you don't need to make two passes,
per se.  Instead, write any code or quotes that you find on a line
into a string buffer as you find them, and apply the grep's regex to
that buffer whenever you finish filtering a line:

  #pseudocode
  foreach my $file (@ARGV) {
    if (open FILE, $file) {
      my $line = 0;
      my $context = 'code';
      while (<FILE>) {
        chomp;
        $line++;
        my ($text, $comment) = filter($_, \$context);
        print "$file $line: $_\n" if $text =~ $pattern;
      }
      close FILE;
    }
  }

The magic is in how you write 'filter()'.  See my previous post for a
summary of the logic behind it.
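Since that post isn't quoted here, a minimal sketch of one possible filter() follows. It assumes C-style /* ... */ block comments only (no // comments or string literals, which you'd handle the same way with more states), and it takes the line plus a reference to the cross-line context scalar, returning the code and comment portions separately:

```perl
use strict;
use warnings;

# filter($line, \$context): split a line into (code, comment) text,
# carrying comment state across lines via the $context scalar ref.
# Sketch only -- handles /* ... */ comments, not // or quoted strings.
sub filter {
    my ($line, $context) = @_;
    my ($text, $comment) = ('', '');
    while (length $line) {
        if ($$context eq 'comment') {
            if ($line =~ s{\A(.*?\*/)}{}s) {    # comment closes on this line
                $comment .= $1;
                $$context = 'code';
            } else {                            # comment runs past end of line
                $comment .= $line;
                $line = '';
            }
        } else {
            if ($line =~ s{\A(.*?)(/\*)}{}s) {  # comment opens on this line
                $text    .= $1;
                $comment .= $2;
                $$context = 'comment';
            } else {                            # plain code to end of line
                $text .= $line;
                $line  = '';
            }
        }
    }
    return ($text, $comment);
}
```

The loop eats the line in chunks, switching states at each /* or */ it finds, so code like `a /* x */ b /* y` comes back with `a  b ` as text and the rest as comment, with $$context left as 'comment' for the next line.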

> How do I read an entire file into a string? I know how to do it record
> by record. Is there a more efficient way?

If you want to slurp an entire file into a single string:

  $string = join '', <FILE>;

In list context, <FILE> returns all of the file's lines as a list,
which join then concatenates back into one string, line breaks and
all.
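A slightly more efficient idiom is to locally undefine $/, the input record separator, so a single read returns the whole file at once (the filename here is just for illustration):

```perl
use strict;
use warnings;

my $file = 'input.txt';   # hypothetical filename

# Slurp the whole file in one read by turning off the
# input record separator for the duration of the block.
my $string = do {
    local $/;             # undef $/ => no record boundaries
    open my $fh, '<', $file or die "can't open $file: $!";
    <$fh>;                # one read returns the entire file
};
```

Because local restores $/ when the block exits, line-by-line reads elsewhere in the program are unaffected.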

-- 
Jonathan "Dataweaver" Lang

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
http://learn.perl.org/