On 09.04.2015 at 14:25, Martin Nowak wrote:
On 04/08/2015 03:56 PM, Sönke Ludwig wrote:


The problem is that even the pull parser alone is relatively slow. Also,
for some reason the linker reports unresolved symbols as soon as I build
without the -debug flag...

The review hasn't started yet, and I'm already against the "stream"
parser, because it hardly deserves the name parser; it's more like a lexer.

Because tcha's benchmark code was a very specific hack for the data
structure in use, I tried to write a proper stream parser to have a fair
comparison. This is where I stopped (it doesn't work).

http://dpaste.dzfl.pl/8282d70a1254

The biggest problem with that interface is that you have to count
matching start/end markers for objects and arrays in order to skip an
entry. That's not much fun, and it definitely calls for a dedicated
skip-value function.
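To make the counting concrete, here is a minimal sketch of such a skip-value helper over a token stream. Python is used only as illustrative pseudocode; the token representation (plain strings, with "{"/"[" opening a nested value and "}"/"]" closing one) is an assumption, not the actual D lexer's token type.

```python
def skip_value(tokens):
    """Consume exactly one complete JSON value from a token iterator.

    A scalar is one token; an object or array is consumed by counting
    matching start/end markers until the nesting depth returns to zero.
    (Hypothetical token shape -- the real lexer emits typed tokens.)
    """
    depth = 0
    for tok in tokens:
        if tok in ("{", "["):
            depth += 1
        elif tok in ("}", "]"):
            depth -= 1
        if depth == 0:
            return  # value (scalar or container) fully consumed
```

For example, given the tokens of `{"a": {"b"}}` followed by more input, one call consumes the whole outer object and leaves the iterator positioned on the next token.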

There are two very nice alternative approaches in the benchmark repo.

https://github.com/kostya/benchmarks/blob/master/json/test_pull.cr
https://github.com/kostya/benchmarks/blob/master/json/test_schema.cr
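The pull style in those examples lets the caller drive the parser with expectations instead of counting markers. The following is only a rough sketch of that idea, not the API of the linked Crystal code; the class and method names (`PullParser`, `next`, `expect`) are invented for illustration, and Python stands in for the real implementation language.

```python
class PullParser:
    """Tiny caller-driven parser sketch over a pre-lexed token list.

    The caller states what it expects next; mismatches fail early
    instead of being discovered after manual depth counting.
    """

    def __init__(self, tokens):
        self._tokens = iter(tokens)

    def next(self):
        # hand the next raw token to the caller
        return next(self._tokens)

    def expect(self, tok):
        # assert that the next token is exactly `tok`
        got = self.next()
        if got != tok:
            raise ValueError(f"expected {tok!r}, got {got!r}")
        return got
```

Usage against the tokens of `{"x": 1}` would read linearly: `expect("{")`, read the key, `expect(":")`, read the value, `expect("}")`.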


That would be a nice intermediate-level parser. However, the range-based parser as it is now is also useful for things like automated deserialization (which I don't want to include at this point), where you don't know in which order fields arrive and have to filter through the data in that style anyway.
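The order-independent filtering that deserialization needs can be sketched as follows. This is a hedged illustration only: the target type (a point with `x`/`y` fields) and the `(key, value)` pair stream are assumptions standing in for what the range-based parser would yield.

```python
def deserialize_point(fields):
    """Fill a struct-like value from (key, value) pairs that may
    arrive in any order, silently skipping unknown fields -- the
    filtering style a range-based parser forces on the caller.
    (Hypothetical example type, not part of the library.)
    """
    point = {"x": 0.0, "y": 0.0}
    for key, value in fields:
        if key in point:
            point[key] = float(value)
        # unknown keys are skipped rather than treated as errors
    return point
```

Note that the function cannot assume `x` comes before `y`, which is exactly why the dispatch-by-name loop is needed.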

But if inlining works properly, it should be no problem to implement other APIs on top of it.
