- Is this a valid approach?

It is possible that your Parsec lexer will need to see the entire input before it delivers any tokens at all to the Happy parser. This might cause a space problem, depending on how large your inputs are likely to be.
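A minimal illustration in plain Haskell (not actual Parsec code) of why this matters: a lexing pass that validates the entire input before returning any token, as a Parsec `many lexToken <* eof` would, cannot stream its output, whereas a lazy tokenizer delivers tokens on demand. The names `allAtOnce` and `streaming` are mine, invented for this sketch.

```haskell
-- Hypothetical stand-ins, not from the original mail:
-- 'allAtOnce' mimics a lexer that checks the whole input for lexical
-- errors before yielding anything; 'streaming' yields tokens lazily.

allAtOnce :: String -> Either String [String]
allAtOnce s
  | all (all (`elem` ['a'..'z'])) ws = Right ws   -- forces the whole input
  | otherwise                        = Left "lexical error"
  where ws = words s

streaming :: String -> [String]
streaming = words   -- lazy: the first token exists before the rest is read
```

With `allAtOnce`, asking for even the first token traverses all of the input, so the whole token list is resident at once; with `streaming`, `head (streaming input)` reads only up to the first whitespace.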

- What is your workflow on parsing complex data structures?

I usually write a lexer by hand, as a little state machine delivering a lazy list of tokens, then pass the tokens as the input to a grammar built from parser combinators. For larger inputs, I use lazy parser combinators to avoid space leaks.
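A hedged sketch of the first half of that workflow: a small hand-rolled state-machine lexer that delivers a lazy list of tokens. The `Token` type and its categories are invented for illustration, not taken from any real project.

```haskell
import Data.Char (isSpace, isAlpha, isAlphaNum, isDigit)

-- Illustrative token type; a real language would have more cases.
data Token = TIdent String | TNum Integer | TSym Char
  deriving (Eq, Show)

-- Each equation is one "state" of the machine; the lazy cons (:)
-- means tokens after the current one are only produced on demand.
lexer :: String -> [Token]
lexer [] = []
lexer (c:cs)
  | isSpace c = lexer cs
  | isAlpha c = let (w, rest) = span isAlphaNum (c:cs)
                in TIdent w : lexer rest
  | isDigit c = let (d, rest) = span isDigit (c:cs)
                in TNum (read d) : lexer rest
  | otherwise = TSym c : lexer cs
```

Because the result is an ordinary lazy list, it can be fed directly to a token-consuming grammar without ever holding the whole token stream in memory.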

- What about performance? Since my project is going to be an interpreted language, parsing performance might be interesting as well. I've read that Happy is in general faster than Parsec, but what if I combine both of them as I said above? I guess that parsing a simple list of tokens without any nested parser structures would be pretty fast?


Parser combinators are rarely a performance bottleneck in my experience. However, relying on parser combinators to do lexing often slows things down too much (too much back-tracking).
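To make that last point concrete, here is a hedged sketch of combinators that run over a pre-lexed token list rather than over characters: matching a token is a single pattern match, so there is no character-level backtracking to pay for. All names here are illustrative.

```haskell
-- Illustrative only: a minimal token type and token-stream parsers.
data Token = TNum Integer | TSym Char
  deriving (Eq, Show)

type Parser a = [Token] -> Maybe (a, [Token])

-- Consuming one expected token is a single O(1) comparison.
tok :: Token -> Parser ()
tok t (t' : ts) | t == t' = Just ((), ts)
tok _ _                   = Nothing

number :: Parser Integer
number (TNum n : ts) = Just (n, ts)
number _             = Nothing

-- Grammar rule: number '+' number, sequenced in the Maybe monad.
addExpr :: Parser Integer
addExpr ts0 = do
  (a, ts1) <- number ts0
  (_, ts2) <- tok (TSym '+') ts1
  (b, ts3) <- number ts2
  Just (a + b, ts3)
```

The same shape is what you get when a Happy grammar, or a combinator grammar, consumes tokens produced by a separate lexer: the expensive character handling happens once, up front.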

Regards,
    Malcolm
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe