Dan Sugalski <[EMAIL PROTECTED]> wrote:

 > (Keeping in mind the input is source, and
 > the output is a syntax tree)

Will you be my hero?

Or

Your clarity is sincerely appreciated.

Ok, _from_ the books on the reading list, I'm seeing no precedent for a
parser/lexer/tokenizer that uses multiple input "languages". Yes, I know
GCC handles F77/ASM/C/C++, but I'm not sure those completely relate,
since GCC gives each language its own separate front end rather than one
shared parser. Simon (?) brought up the problem that we might end up
with a monolithic beastie, and my concern is making it simple for the
(advanced) user to come up with syntax variants.

Reading what you say: "parser/lexer/tokenizer" is multiple things,
"part" is one thing. That's got to be a stumbling block of some kind. If
we separate out the "parser" (or look at each of them individually,
which might further help clarity), we have something that might be "plug
and pl...", er..., "plug and code". But then, if we're using flex/bison
(lex/yacc), that doesn't sufficiently satisfy what I'm looking for in
end-user usability. If joe hacker-wannabe wants to program in Java, he
shouldn't have to write flex specs, mess with gobs of perlguts, or,
hopefully, go outside the Perl language. I see this as leaving two
options: having a Perl layer above the parser, or replacing the parser
with Perl.
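
To make that concrete, here's roughly the shape I have in mind for the
"Perl layer above the parser" option. Every name in it is invented; it's
a sketch, not a proposal for the real API:

    #!/usr/bin/perl -w
    use strict;

    my %tokenizer;        # language name => code ref

    # each input "language" registers nothing but a tokenizer
    sub register_language {
        my ($lang, $code) = @_;
        $tokenizer{$lang} = $code;
    }

    # the one small, shared parser entry point
    sub parse {
        my ($lang, $source) = @_;
        die "no front end registered for $lang\n"
            unless $tokenizer{$lang};
        my @tokens = $tokenizer{$lang}->($source);
        return [ 'program', @tokens ];  # stand-in for real tree building
    }

    # joe hacker-wannabe writes only this bit, in plain perl
    register_language('tinycalc', sub {
        my $src = shift;
        return map { [ ($_ =~ /^\d+$/ ? 'num' : 'op'), $_ ] }
               split ' ', $src;
    });

    my $tree = parse('tinycalc', '1 + 2');

No flex, no bison, no perlguts: the parser core stays small, and the
variant lives entirely in user-level code.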

If I understand the latter and our goals correctly, that's where we're
going, although I'll still voice my opinion that a simpler Perl
abstraction on top of it all, one a user can define as easily as a good
non-compiled module, would be highly beneficial. It would give us a
single parser that everything plugs into.
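
The abstraction I'm picturing is no heavier for the user than an
ordinary pure-perl module. Again, the package name and the hook are made
up, just to show the weight of the thing:

    package Syntax::Loud;       # a toy variant: statements end in "!"

    sub tokens {
        my ($class, $src) = @_;
        $src =~ s/!/;/g;        # normalize back to stock syntax
        return split ' ', $src;
    }

    1;

Drop that in a file and the layer above loads it like any other module;
nobody has touched a grammar file or a line of C.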

No semantics, guys, please. I'm trying to figure out how we're going to
allow multiple input "languages" with a single, small parser. Maybe not
a source filter, so to speak, but a layer above what's normal for this
collection of operations. My concern is giving users the ability to
innovate within Perl without being Perl masters.
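
Putting the two sketches together (same invented names as above),
plugging a variant in would look like:

    # the Syntax::Loud variant feeds the same parse() entry point
    register_language('loud', sub { Syntax::Loud->tokens($_[0]) });

    my $tree = parse('loud', 'print "hi"!');

One small parser, any number of cheap front ends.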

p

