[Sorry for chiming in late here...]

> On Wed, Nov 29, 2000 at 02:02:31PM -0500, Dan Sugalski wrote:
> > I'm really thinking that the lexer, parser, and tokenizer can't be anywhere 
> > near as separate as we'd like.

Simon Cozens <[EMAIL PROTECTED]> wrote:

> This would *honestly* be my preference; I think it would be far easier to
> write and understand than anything else. So long as it's nicely re-entrant
> we should be fine. My only worry is, how do we reconcile this with the
> idea of Perl having an easily modifiable grammar and being a good
> environment for little-language stuff?

I share that worry, too, but I have another concern (not a full worry, just a
concern ;).

I believe that to do a true port to the JVM (e.g., supporting
eval($STRING)), we'll need to implement a bootstrapping parser in Java for
the parser code, since the parser has to be available at run time.
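
To make that concrete (this is purely a hypothetical sketch; none of these
names or interfaces come from any real design), eval($STRING) means the
source isn't known until run time, so the JVM port has to carry a parser of
its own, written in Java:

    // Purely illustrative -- Parser, ParseTree, and Backend are made-up names.
    interface ParseTree { }

    interface Parser {
        ParseTree parse(String source);   // must be callable at run time
    }

    interface Backend {
        Object run(ParseTree tree);       // interpret or compile-and-run
    }

    final class PerlOnJvm {
        private final Parser parser;      // the bootstrapping parser, in Java
        private final Backend backend;

        PerlOnJvm(Parser parser, Backend backend) {
            this.parser = parser;
            this.backend = backend;
        }

        // eval($STRING): the source arrives at run time, so it can't be
        // parsed ahead of time by the "real" (C) toolchain.
        Object eval(String code) {
            return backend.run(parser.parse(code));
        }
    }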

My concern is that the more tightly the lexer, parser, and tokenizer are
integrated, the harder they will be to reimplement in other languages.
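
To give a made-up example of the kind of integration I mean (just a sketch,
not anything from the actual discussion): in Perl, whether "/" starts a
regex or is a division operator depends on what the parser currently
expects, so the lexer ends up needing feedback from the parser.  Every
reimplementation would have to reproduce that feedback loop, not just a
token table:

    // Hypothetical sketch of lexer/parser coupling; all names are invented.
    enum Expecting { OPERATOR, TERM }

    final class CoupledLexer {
        private final String src;
        private int pos = 0;

        CoupledLexer(String src) { this.src = src; }

        // The parser passes its current expectation back into the lexer,
        // so the two can't be reimplemented independently.
        String nextToken(Expecting expecting) {
            char c = src.charAt(pos++);
            if (c == '/') {
                // Same character, two meanings, disambiguated by parser state.
                return (expecting == Expecting.TERM) ? "REGEX_START" : "DIVIDE";
            }
            // ... the rest of the token rules elided ...
            return String.valueOf(c);
        }
    }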

Can anyone address this concern?  Will it still be easy to reimplement the
whole "mutant beast" anew as needed?  

At this point, since the discussion is still quite abstract (i.e., we have no
APIs yet ;), I can't quite talk myself down from the worry that this will be
a problem.  Can someone else talk me down?  ;)



(BTW, I'd like to be able to do the same thing in Scheme, too, for better
Guile integration, but that's more pie-in-the-sky.  ;)


-- 
Bradley M. Kuhn  -  http://www.ebb.org/bkuhn
