Re: Support for interactive interpreters...

2001-01-17 Thread Simon Cozens

On Wed, Jan 17, 2001 at 11:03:05AM -0200, Branden wrote:
> I work with Perl and I also work with Tcl, and one thing I actually like
> about Tcl is that it's interactive like a shell, i.e. it gives you a prompt,
> where you type commands in and, if you type a whole command by the end of
> the line, it executes it, otherwise, it gives you another prompt and keeps
> reading more lines until the whole command is typed, when it's executed.

Sounds like http://dev.perl.org/rfc/184.html to me.
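
For concreteness, here's a rough sketch of that keep-reading-until-the-command-is-complete
behaviour in Perl 5 (my own quick hack, not what RFC 184 actually specifies; the
"is it complete yet?" test just sniffs perl's error messages and is deliberately crude):

    #!/usr/bin/perl
    use strict;
    use warnings;
    $| = 1;                                   # flush the prompt before reading

    my $buffer = '';
    while (1) {
        print $buffer eq '' ? 'perl> ' : '....> ';
        my $line = <STDIN>;
        last unless defined $line;            # Ctrl-D ends the session
        $buffer .= $line;

        # See whether the buffer compiles as a complete block, without running it.
        if (eval "sub { $buffer }") {
            my @result = eval $buffer;        # complete: now execute it for real
            print "error: $@" if $@;
            print "@result\n" if @result && !$@;
            $buffer = '';
        }
        elsif ($@ =~ /Missing right curly|string terminator/) {
            next;                             # looks merely unfinished: keep reading
        }
        else {
            print "syntax error: $@";         # genuinely broken: report and start over
            $buffer = '';
        }
    }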

-- 
Do not go gentle into that good night/Rage, rage against the dying of the light



Re: Backtracking through the source

2000-11-30 Thread Simon Cozens

On Wed, Nov 29, 2000 at 02:57:23PM -0500, Dan Sugalski wrote:
> > My only worry is, how do we reconcile this with the idea of
> > Perl having an easily modifiable grammar and being a good environment for
> > little-language stuff?
>
> That's a good question, and it depends on what Larry's thinking of for
> little languages. Smacking the perl parser around enough to handle, say,
> something C or Pythonish shouldn't be a huge hassle. Making it handle
> something Lisp-like, though, is another matter entirely.
 
I think my worry was more general than that: if we've got a big monolithic
parsebeast which is making lots of Perlish decisions, at what levels do we
allow the user to modify that? Do they get to replace the whole thing and 
implement their own tokeniser, lexer and parser, or can we find a way to apply
"hooks" to replaceable components?

-- 
"Dogs believe they are human.  Cats believe they are God."



Re: Backtracking through the source

2000-11-30 Thread Simon Cozens

On Thu, Nov 30, 2000 at 11:54:31AM +, Simon Cozens wrote:
> I categorically do *NOT* want perl6-internals to turn into a basic course in
> compiler design, purely for the benefit of those who know nothing at all about
> what they're trying to achieve. I'd like Perl 6 to be a masterwork, and
> masterworks require master craftsmen. If you want to partake in compiler
> design, it makes more than a little sense to find out how to do so.
 
Guh, I shouldn't have said that, because I know exactly what'll happen now:
people will accuse me of being elitist and reactionary and trying to shut
people out who want to help.

Outside my room, some constructors are building a school. I'd like to help.
I'd like to take part in the building; it'll be great! We'll have lots of
classrooms, and a playground, and I think there should be an armoury, because
all good schools have an armoury. Oh, that's military bases; but that doesn't
matter, it should have one anyway.

But for some reason, the constructors don't think much of my plans. They're
saying something about the need for "foundations" or something or other that I
can't understand. Look, it's not my fault I have no knowledge of engineering!
I *really* want to help, and they're trying to exclude me. Horrible, elitist
bastards!

Are they? Are they being elitist? Of course not. Are they trying to exclude
me? *No*. By my own lack of knowledge and utility, *I* *exclude* *myself*, and
no amount of wanting to help makes up for that.

Think about it.

-- 
Almost any animal is capable of learning a stimulus/response association,
given enough repetition.
Experimental observation suggests that this isn't true if double-clicking
is involved. - Lionel, Malcolm Ray, asr.



Re: Backtracking through the source

2000-11-28 Thread Simon Cozens

On Tue, Nov 28, 2000 at 06:58:57PM +, Tom Hughes wrote:
> I didn't say that having infinite lookahead was better than allowing
> backtracking. I simply said that the two were equivalent and that any
> problem that can be solved by one can be solved by the other.

Fair enough.

> That's quite a nasty example for a number of reasons. Firstly you
> might have to back up and reparse a very large amount of code as the
> subroutine definition could be a very long way away from the print
> statement.

You wouldn't have to reparse it all. You'd have to insert the new information
into the parse and see how that changes things. It'd probably only change a
very localised area, a single statement per occurrence at most.
 
> Secondly in order to know that you needed to back up you'd have to
> remember that you had had to guess that foo was a filehandle but
> that it might also be a subroutine, and it raises a whole series of
> questions about what other similar things you might need to remember.
 
Parsing Perl is not easy. :) At some points, you have to say, well, heck, I
don't *know* what this token is. At the moment, perl guesses, and it guesses
reasonably well. But guessing something wrongly which you could have got right
if you'd read the next line strikes me as a little anti-DWIM. 
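
The classic case, to put some (made-up) code behind the hand-waving:

    # With no declaration in sight, perl guesses that foo is a filehandle
    # and parses this as print-to-filehandle:
    print foo "some text\n";

    # ...hundreds of lines later the truth arrives: foo was a subroutine,
    # and the statement above really wanted to be print(foo("some text\n")).
    sub foo { my ($s) = @_; return uc $s }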

In a sense, though, you're right; this is a general problem. I'm currently
trying to work out a design for a tokeniser, and it seems to me that there's
going to be a lot of communicating of "hints" between the tokeniser, the lexer
and the parser. 

The other alternative is to completely conflate the three, which would work
but I think people would lose their minds.

Take, for instance:

${function($value)}[$val]

Now, how on earth do I split this into tokens? Do I say:

/${/ - and expect some stuff which will resolve to a variable name or
   array reference, followed by a }

If we go that way, we're passing lots of hints to both the lexer and the
parser.

/${[^}]+}/ and then /\[[^]]+\]/

If we do that, we have to keep state between the two tokens so that we don't
make [$val] into a reference constructor and stuff up the parser.

/^${([^}]+)}\[([^]]+)\]/ - Dereference $1 as an array, take value $2.

If we do *that*, then we're already being tokeniser, lexer and parser rolled
into one.
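
As a toy illustration of the middle option (this is my own strawman, nothing
like perl's real toke.c), the tokeniser can carry a one-token flag so the
trailing [...] gets classed as a subscript rather than an anonymous-array
constructor; that flag is exactly the sort of "hint" I mean:

    use strict;
    use warnings;

    sub toy_tokenise {
        my ($src) = @_;
        my (@tokens, $expect_subscript);
        while (length $src) {
            if ($src =~ s/^\$\{([^}]+)\}//) {
                push @tokens, [ DEREF_BLOCK => $1 ];
                $expect_subscript = 1;              # state carried to the next token
            }
            elsif ($src =~ s/^\[([^\]]+)\]//) {
                push @tokens, [ $expect_subscript ? 'SUBSCRIPT' : 'ANON_ARRAY', $1 ];
                $expect_subscript = 0;
            }
            else {
                $src =~ s/^(.)//s;                  # anything else, one char at a time
                push @tokens, [ CHAR => $1 ];
                $expect_subscript = 0;
            }
        }
        return @tokens;
    }

    # toy_tokenise('${function($value)}[$val]')
    #   => [ DEREF_BLOCK => 'function($value)' ], [ SUBSCRIPT => '$val' ]
    # toy_tokenise('[$val]')
    #   => [ ANON_ARRAY => '$val' ]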

Parsing Perl is hard. Trust me. :)

-- 
MISTAKES:
It Could Be That The Purpose Of Your Life Is Only To Serve As
A Warning To Others

http://www.despair.com