On 03/06/14 09:15, Johannes Pfau wrote:

I'd probably prefer a tokenizer/lexer as the lowest layer, then SAX and
DOM implemented using the tokenizer. This way we can provide a kind of
input range. I actually used Brian Schott's std.lexer proposal to build a
simple JSON tokenizer/lexer and it worked quite well. But I don't
think std.lexer is zero-allocation yet, so that's an important drawback.

If I recall correctly, it allocates new strings instead of slicing the input; the allocated strings are then reused. If the input were sliced instead, the whole input buffer would be retained in memory even when only a small part of it is actually used.
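A minimal sketch of the trade-off (illustrative only, not std.lexer's actual code): a slice into the input pins the entire buffer, because the GC cannot collect memory that a live slice still points into, while copying the token frees the lexer from the buffer's lifetime.

```d
import std.string : indexOf;

// Slicing: zero-allocation, but the returned slice keeps the whole
// `input` buffer alive for as long as the slice itself is alive.
string firstTokenSlice(string input)
{
    auto i = input.indexOf(' ');
    return i < 0 ? input : input[0 .. i];
}

// Copying: one allocation per token, but the original buffer can be
// collected once the lexer is done with it.
string firstTokenCopy(string input)
{
    return firstTokenSlice(input).idup; // independent immutable copy
}
```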

--
/Jacob Carlborg
