Gaspard Bucher wrote:
> PS: There is another reason, aside from aesthetics and simpler grammar
> to filter white spaces inside the tokenizer: you avoid all the parser
> conflicts you could get with "empty | or space" rules.
>
> 2007/10/24, Gaspard Bucher <[EMAIL PROTECTED]>:
>   
>>> Gaspard Bucher <[EMAIL PROTECTED]> wrote:
>>>       
>>>> I do not understand why lemon waits for one more token when it has
>>>> enough information to reduce.
>>>>
>>>> I want to recognize:
>>>> foo = Bar()
>>>> when the token CLOSE_PAR is received, not when an extra token is parsed.
>>>>
>>>> How can I avoid lemon waiting for the extra token before reducing ?
>>>>
>>>>         
>>> I don't think you can.  Why do you want to?  Why not just go
>>> ahead and send it the next token?
>>>       
>> Most people work around this problem using white-space. That could be
>> a solution, but then my grammar would be filled with
>> "white-space | nothing" rules. I had hoped Lemon could reduce when
>> there is no other way out of the current stack, as that is more elegant.
>> I went into the sources and saw this comment:
>>
>>     
LA(LR) is the answer - just feed your tokens to the parser as they arrive
and give it a chance to be LALR, not LR or SLR :)

mak


-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------
