At 04:50 PM 11/27/00 -0500, Kurt D. Starsinic wrote:
>On Mon, Nov 27, 2000 at 04:41:34PM -0500, Dan Sugalski wrote:
> > Okay, here's a question for those of you with more experience at parsers
> > than I have. (Which would be about everyone)
> >
> > Is there any reasonable case where we would need to backtrack over
> > successfully parsed source and redo the parsing? I'm not talking about the
> > case where regular expressions run over text and ultimately fail, but
> > rather cases where we need to chuck out part of what we have and restart?
>
>     Disclaimer:  I'm not sure whether you're asking about lexing, tokenizing,
>both, or neither.

Both, really. Mainly the case where we've already tokenized some code and 
thrown the tokens into the syntax tree, but then decide we need to yank 
them back out because we changed our minds.
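To make that concrete, here's a minimal sketch of what "yanking tokens
back" could look like if the lexer keeps its output around until the
parse commits. This is just illustration--none of these names come from
the perl sources, and token ownership/error handling are glossed over:

    #include <stddef.h>

    /* Hypothetical token stream with a checkpoint/rollback interface. */
    typedef struct {
        int   *tokens;    /* tokens the lexer has already produced  */
        size_t count;     /* how many are in the buffer             */
        size_t consumed;  /* how many the parser has taken so far   */
    } TokenStream;

    /* Remember where the parser is before a speculative parse. */
    size_t ts_checkpoint(const TokenStream *ts) {
        return ts->consumed;
    }

    /* The speculative parse failed: hand the tokens back to the
     * stream instead of re-lexing the source text. */
    void ts_rollback(TokenStream *ts, size_t mark) {
        ts->consumed = mark;
    }

The point being that it's the token buffer, not the raw source text,
that gets rewound--so text behind the oldest live checkpoint could
still be discarded.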

>     In current perl, we do something _like_ that to disambiguate certain
>situations.  Grep the sources for `expectation'.  I wouldn't be surprised
>if something like this also goes on with, e.g., multi-line regexen.
>
>     Oh, you said `reasonable'.

:)

The big reason I'm thinking about this is that I'm trying to decide how 
much text we need to buffer, and at what point we can throw down a marker 
and declare "I don't care about anything before this--it's done". Mainly 
because I'm contemplating how to deal with both perl-as-shell and perl 
source of indeterminate length (like if we're reading in perl code from 
a socket or something, where we might not want to wait until it's all in 
before we start chewing on it).
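For the buffering side, a rough sketch of the marker idea, under the
assumption that the parser promises never to look back past a committed
offset (again, hypothetical names, realloc failure ignored for brevity):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical streaming source buffer; bytes arrive in pieces
     * from a socket, tty, or pipe. */
    typedef struct {
        char  *buf;
        size_t len;   /* live bytes currently held */
    } SrcBuffer;

    /* Append newly read bytes to the buffer. */
    void src_append(SrcBuffer *sb, const char *data, size_t n) {
        sb->buf = realloc(sb->buf, sb->len + n);
        memcpy(sb->buf + sb->len, data, n);
        sb->len += n;
    }

    /* Throw down the marker: everything before `offset` is done, so
     * slide the live tail to the front and reclaim the space. */
    void src_commit(SrcBuffer *sb, size_t offset) {
        memmove(sb->buf, sb->buf + offset, sb->len - offset);
        sb->len -= offset;
    }

How far forward the marker can safely move is exactly the backtracking
question above: only past text whose tokens we know we'll never need to
yank back out.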

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
