On Mon, Nov 27, 2000 at 04:41:34PM -0500, Dan Sugalski wrote:
> Okay, here's a question for those of you with more experience at parsers 
> than I have. (Which would be about everyone)
> 
> Is there any reasonable case where we would need to backtrack over 
> successfully parsed source and redo the parsing? I'm not talking about the 
> case where regular expressions run over text and ultimately fail, but 
> rather cases where we need to chuck out part of what we have and restart?

    Disclaimer:  I'm not sure whether you're asking about lexing, tokenizing,
both, or neither.

    In current perl, we do something _like_ that to disambiguate certain
situations.  Grep the sources for `expectation'.  I wouldn't be surprised
if something like this also goes on with, e.g., multi-line regexen.
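    For list readers unfamiliar with the `expectation' trick: this is not
perl's actual toke.c, just a toy sketch of the idea.  The lexer remembers
whether it currently expects a term or an operator, and uses that state to
disambiguate tokens like `/' (division vs. start of a regex) without
backtracking over already-parsed input.  All names here are made up for
illustration.

```python
# Toy sketch of expectation-driven lexing (NOT perl's real tokenizer).
# The lexer tracks whether it expects a "term" or an "operator" and uses
# that to decide whether '/' starts a regex or means division.

def tokenize(src):
    tokens = []
    expect = "term"              # at start of input we expect a term
    i = 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("NUM", src[i:j]))
            expect = "operator"  # after a term, an operator should follow
            i = j
        elif c == '/':
            if expect == "operator":
                tokens.append(("DIV", "/"))       # infix division
                expect = "term"
                i += 1
            else:
                j = src.index('/', i + 1)         # naive: find closing '/'
                tokens.append(("REGEX", src[i:j + 1]))
                expect = "operator"
                i = j + 1
        else:
            tokens.append(("OP", c))
            expect = "term"
            i += 1
    return tokens
```

So `6 / 2` lexes the slash as division, while a leading `/ab/` lexes as a
regex, and no already-consumed text ever gets re-scanned.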

    Oh, you said `reasonable'.

    Peace,
* Kurt Starsinic ([EMAIL PROTECTED]) ---------------- Senior Software Architect *
|      `It is always possible to agglutinate multiple separate problems     |
|       into a single complex interdependent solution.  In most cases       |
|       this is a bad idea.' - Ross Callon                                  |
