"Eric M. Ludlam" <[EMAIL PROTECTED]> writes:

>   On the make it easy side, Emacs Lisp is just not a great language
> for making an easy-to-read lexical analyzer.  The macros let you write
> and mix individual analyzers in a convenient high-level way.

My understanding was that Common Lisp has a configurable reader that
is flexible enough so that one does not need to use lexers.  Is this
true?
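
To make concrete what "configurable reader" means here: Common Lisp lets you install reader macros in a readtable, so the reader itself can be taught new token syntax. Below is a minimal sketch (my own illustration, not from the original mail) that makes `$...$` read as a custom token; the `:dollar-token` tag is a hypothetical representation.

```lisp
;; Sketch: teach the Common Lisp reader a new token syntax via a
;; reader macro.  After installation, $foo bar$ reads as a list
;; tagged with the (hypothetical) keyword :dollar-token.
(defun read-dollar-token (stream char)
  (declare (ignore char))
  (list :dollar-token
        (with-output-to-string (out)
          ;; Collect characters up to the closing $ (or end of input).
          (loop for c = (read-char stream nil nil)
                until (or (null c) (char= c #\$))
                do (write-char c out)))))

(set-macro-character #\$ #'read-dollar-token)

;; (read) on the input $foo bar$ now returns (:DOLLAR-TOKEN "foo bar")
```

Emacs Lisp's reader, by contrast, is not user-extensible in this way, which is presumably why one would have to "augment" it first.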

I wonder if it would be a workable approach to augment Emacs to have a
better reader, then to use that as the lexer.

I don't have practical experience with building parsers.
Theoretically, one wouldn't need a lexer: something that just returns
the next character from the input would be sufficient, and the rest
could be done from the grammar.  (I mean that one doesn't need lex and
could do everything in yacc instead.)  But that makes parsers
difficult to write, and probably slow as well.  So one does need lexers.
But the theory seems to imply that the lexers don't need to be
all-powerful: if the lexer is too stupid, then one can still do it
from the grammar.
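
As a toy illustration of that "scannerless" idea (my own sketch, not from the mail): here the lexer is nothing but `read-char`/`peek-char`, and the grammar layer assembles a number from individual digit characters itself.

```lisp
;; Sketch: parsing a decimal number without a lexer.  The "token
;; source" is just the next character; the grammar code accumulates
;; digits directly.
(defun parse-number (stream)
  (loop with n = 0
        for c = (peek-char nil stream nil nil)  ; look at next char, nil at EOF
        while (and c (digit-char-p c))
        do (read-char stream)                   ; consume the digit
           (setf n (+ (* n 10) (digit-char-p c)))
        finally (return n)))

;; (with-input-from-string (s "123+4") (parse-number s))  => 123
```

A smarter lexer would hand back the whole number as one token; pushing the work into the grammar like this is exactly the trade-off described above.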

So would it work in practice to "just" use an augmented Lisp reader as
the lexer?

Kai
