On Mon, May 3, 2010 at 10:53 AM, erik quanstrom <quans...@quanstro.net> wrote:
>> It's always been easier for me to use python's/perl's regular
>> expressions than plan9's when I needed to process a text file.
>> For simple things, e.g. while editing ordinary text in acme/sam,
>> plan9's regexps are just fine.
>
> i find it hard to think of cases where i would need
> such sophistication and where tokenization or
> tokenization plus parsing wouldn't be a better idea.

A lot of the `sophisticated' Perl I've seen uses some horrible regexes
when really the job would have been done better and faster by a
simple, job-specific parser.
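
To make that concrete, here's a minimal sketch in Perl.  The config
format, the names, and both versions are made up for illustration;
the single regex tries to lex, unquote, and strip comments all at
once, while the job-specific parser does one small step at a time:

  #!/usr/bin/perl
  use strict;
  use warnings;

  my $line = 'name = "some value"  # comment';

  # the one-regex reflex: lexing, quoting, and comment handling
  # packed into a single pattern
  if ($line =~ /^\s*(\w+)\s*=\s*(?:"((?:[^"\\]|\\.)*)"|(\S+))\s*(?:#.*)?$/) {
      print 'regex:  ', $1, ' => ', (defined $2 ? $2 : $3), "\n";
  }

  # the job-specific parser: three small, obvious steps
  my $p = $line;
  $p =~ s/#.*$//;                       # 1. drop trailing comment
  my ($key, $val) = split /=/, $p, 2;   # 2. split on the first =
  s/^\s+//, s/\s+$// for ($key, $val);  # 3. trim both ends
  $val =~ s/^"(.*)"$/$1/;               #    and peel the quotes
  print "parser: $key => $val\n";

Both recover the same key and value; the difference shows the first
time the format changes and the regex needs surgery.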

I've yet to figure out why this happens so often, but I think I can
narrow it down to a combination of ignorance, laziness, and perhaps
the all-too-frequent assumption `oh, I can do this in 10 lines of
perl!'  I guess by the time you've written half a parser in line
noise, it's too late to quit while you're behind.

>
> for example, you could write a re to parse the output
> of ls -l or ps.  but awk '{print $field}' is so much
> easier to write and read.
>
> so in all, i view perl "regular" expressions as a tough sell.
> i think they're harder to write, harder to read, require more
> and less stable code, and run slower.
>
> one could speculate that perl, by encouraging a monolithic
> rather than tools-based approach, and cleverness over clarity,
> made perl expressions the logical next step.  if so, i question
> the assumptions.
>
> - erik
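
erik's awk example carries over directly: in Perl, split-on-whitespace
plays the role of awk's fields.  A minimal sketch, assuming ps-like
whitespace-separated columns (the column layout and the commented-out
regex are illustrative, not any real ps format):

  #!/usr/bin/perl
  use strict;
  use warnings;

  while (my $line = <STDIN>) {
      # the awk way, transliterated: split ' ' splits on runs of
      # whitespace and skips leading blanks, just like awk's fields
      my @f = split ' ', $line;
      next unless @f;
      print "$f[0]\n";    # awk '{print $1}'

      # the regex way has to restate the shape of the whole line,
      # and breaks as soon as a column changes width or order:
      #   $line =~ /^\s*(\d+)\s+(\S+)\s+([\d:.]+)\s+(.*)$/ and print "$1\n";
  }

Run it as, say, `ps | perl field.pl' (the script name is made up);
the split version needs only the field's position, not the shape of
every other column on the line.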
