I found what looks like a bug in the lexer. When there is an illegal character, the value of the token passed to t_error() contains everything from the invalid character to the end of the input. The input isn't actually consumed, though: if I call t.lexer.skip(1), the rest of the input is processed as expected. So it seems that only the value of the token passed to t_error() is wrong.
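To make the behavior I'm seeing concrete, here is a minimal pure-Python mock of it (this is a hypothetical MiniLexer class I wrote for illustration, not PLY itself): on an illegal character, the error handler's token value spans from the bad character to the end of the input, yet skip(1) only consumes a single character and lexing then resumes normally.

```python
import re

# Matches numbers, identifiers, and whitespace; anything else is "illegal".
TOKEN_RE = re.compile(r"\d+|[a-zA-Z_]\w+|[a-zA-Z_]|\s+")

class MiniLexer:
    """Hypothetical minimal lexer mimicking the PLY error behavior described."""

    def __init__(self, data):
        self.data = data
        self.pos = 0

    def skip(self, n):
        # Counterpart of t.lexer.skip(n): advance past n characters.
        self.pos += n

    def tokens(self):
        out = []
        while self.pos < len(self.data):
            m = TOKEN_RE.match(self.data, self.pos)
            if m:
                if not m.group().isspace():
                    out.append(m.group())
                self.pos = m.end()
            else:
                # Like the t_error() token value in the report: everything
                # from the bad character to the end of the input.
                bad_value = self.data[self.pos:]
                print("Illegal token value: %r" % bad_value)
                self.skip(1)  # consume only one character; lexing resumes
        return out

print(MiniLexer("abc $ 123").tokens())
```

Here the error handler sees the value `'$ 123'` even though only the `$` is skipped, and `abc` and `123` are still tokenized, matching what I observe with the real lexer.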
I have a trimmed-down version of the grammar that exhibits this problem, plus example inputs. What's the best way to post it?

Pedro

You received this message because you are subscribed to the Google Groups "ply-hack" group. To post to this group, send email to [email protected]. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/ply-hack?hl=en
