> In fact, the time you'd spend writing Read instances would not compare to the half hour required to learn Parsec. And your parser will be efficient (at least, according to the guys from the parser team ;-)

I agree that Read is likely to be inefficient, but the more important drawback is that it gives you no useful error message when a parse fails.
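To illustrate the point: a failed Read parse tells you nothing about where or why it failed. Here is a small base-only sketch (the helper name `parseWithRead` is mine, for illustration):

```haskell
-- A plain `read "foo" :: Int` just throws "Prelude.read: no parse".
-- The total variant, readMaybe, is no more informative: it collapses
-- every possible failure into a single Nothing.
import Text.Read (readMaybe)

-- Try to read an Int; all Read can report is success or failure,
-- with no position and no indication of what was expected.
parseWithRead :: String -> Either String Int
parseWithRead s =
  case readMaybe s of
    Just n  -> Right n
    Nothing -> Left "no parse"
```

For example, `parseWithRead "4x2"` yields `Left "no parse"` — the same message you would get for an empty string, a stray bracket, or any other malformed input.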

Parser combinators are really rather easy to learn and use, and tend to give decent error reports when something goes wrong. In fact, if you just want Read-like functionality for a set of Haskell datatypes, use polyparse: the DrIFT tool can derive polyparse's Text.Parse class (the equivalent of Read) for you, so you do not even need to write the parser yourself!
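To show why combinators can report errors well, here is a deliberately tiny hand-rolled combinator library — this is *not* polyparse's or Parsec's actual API, just a sketch of the general idea that a parser can thread a position along and say what it expected when it fails:

```haskell
import Data.Char (isDigit)

-- A parser tracks its input position, so failures can say where they happened.
newtype Parser a =
  Parser { runParser :: Int -> String -> Either String (a, Int, String) }

-- Accept one character satisfying a predicate, or fail with a message
-- naming what was expected, at which position, and what was found.
satisfy :: String -> (Char -> Bool) -> Parser Char
satisfy expected p = Parser $ \pos s -> case s of
  (c:rest) | p c -> Right (c, pos + 1, rest)
  (c:_)          -> Left ("expected " ++ expected ++ " at position "
                          ++ show pos ++ ", found " ++ show c)
  []             -> Left ("expected " ++ expected ++ " at end of input")

-- One or more occurrences of p.
many1 :: Parser a -> Parser [a]
many1 p = Parser go
  where
    go pos s = case runParser p pos s of
      Left e              -> Left e
      Right (x, pos', s') -> case go pos' s' of
        Right (xs, pos'', s'') -> Right (x : xs, pos'', s'')
        Left _                 -> Right ([x], pos', s')

-- A decimal number: one or more digits.
number :: Parser Int
number = Parser $ \pos s ->
  case runParser (many1 (satisfy "digit" isDigit)) pos s of
    Left e               -> Left e
    Right (ds, pos', s') -> Right (read ds, pos', s')
```

Running `runParser number 0 "abc"` produces `Left "expected digit at position 0, found 'a'"` — already far more useful than Read's "no parse", and real combinator libraries refine this with line/column tracking and backtracking-aware error merging.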

I would caution against using Parsec if your dataset is large. Parsec does not return anything until it has seen the entire input, so it can use a huge amount of memory. The other day someone observed on haskell-cafe that parsing a 9Mb XML file with a Parsec-based parser required >7Gb of memory, compared with 1.3Gb for a strict polyparse-based parser (still too much); the happy conclusion was that the lazy polyparse variant uses a negligible amount by comparison.
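The principle a lazy parser exploits can be shown with a toy base-only example (this is not polyparse's API): results become available before the whole input has been consumed, so earlier results can be garbage-collected while later input is still being read — even an unbounded input is fine.

```haskell
import Data.Char (isDigit)

-- Lazily parse a comma-separated stream of numbers. Each number is
-- yielded as soon as its digits end, without looking at the rest of
-- the input. (The function name lazyInts is mine, for illustration.)
lazyInts :: String -> [Int]
lazyInts s =
  case span isDigit (dropWhile (== ',') s) of
    ("", _)    -> []
    (ds, rest) -> read ds : lazyInts rest
```

Because the result list is produced incrementally, `take 3 (lazyInts input)` finishes even when `input` is infinite — something a parser that must consume all its input before returning (like Parsec) can never do.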

(Declaration of interest: I wrote polyparse.)

Regards,
    Malcolm

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
