Evan,

I had this problem as well: parsing a large JSON document that arrives in
chunks via Netty as HttpObjects. I only needed to do some sanitization on
keys, so reading it all into memory would have been overhead I'd rather skip.

I was able to use Jackson's NonBlockingJsonParser[1] to do this. I feed it
byte arrays as they arrive, and then iterate through the parsed document
until I hit NOT_AVAILABLE. This way you can continue feeding it data until
you've either parsed it all or hit an error.

If you're still working on this problem, I should be able to extract some
code from the project I have and share it.
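In the meantime, here's a minimal sketch of the feed-and-drain loop, assuming
Jackson 2.9+; the class name, chunk contents, and token handling are
illustrative, not the code from my project:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.core.json.async.NonBlockingJsonParser;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ChunkedParseSketch {

    // Drain every complete token currently buffered; stop at NOT_AVAILABLE
    // (more input needed) or null (end of input).
    static void drain(NonBlockingJsonParser parser, List<String> out) throws Exception {
        JsonToken token;
        while ((token = parser.nextToken()) != JsonToken.NOT_AVAILABLE && token != null) {
            if (token == JsonToken.FIELD_NAME) {
                out.add("name:" + parser.getCurrentName());
            } else {
                out.add(token.name());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // The parser itself is the ByteArrayFeeder in 2.9.
        NonBlockingJsonParser parser = (NonBlockingJsonParser)
                new JsonFactory().createNonBlockingByteArrayParser();
        List<String> tokens = new ArrayList<>();

        // Simulate two HttpContent chunks, split in the middle of a key.
        byte[] chunk1 = "{\"na".getBytes(StandardCharsets.UTF_8);
        byte[] chunk2 = "me\":42}".getBytes(StandardCharsets.UTF_8);

        parser.feedInput(chunk1, 0, chunk1.length);
        drain(parser, tokens);       // stops at NOT_AVAILABLE: "na..." is incomplete

        parser.feedInput(chunk2, 0, chunk2.length);
        parser.endOfInput();         // no more chunks coming
        drain(parser, tokens);

        System.out.println(tokens);
        // → [START_OBJECT, name:name, VALUE_NUMBER_INT, END_OBJECT]
    }
}
```

In a Netty handler you'd call feedInput() from channelRead() for each
HttpContent, drain tokens, and call endOfInput() on LastHttpContent.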

Thanks,
Ómar

[1] 
http://fasterxml.github.io/jackson-core/javadoc/2.9/com/fasterxml/jackson/core/json/async/NonBlockingJsonParser.html

On Tuesday, February 2, 2016 at 7:32:44 PM UTC-8, Evan Maczura wrote:
>
> This is what I figured. I will table this project for now and revisit it 
> when additional time is available.
> Thank you!
>
> On Friday, January 29, 2016 at 10:38:50 PM UTC-6, Tatu Saloranta wrote:
>>
>> Current Jackson streaming parser abstractions are designed for blocking 
>> input, so it may be difficult to use non-blocking style, assuming you get 
>> callbacks for content. If it was just a matter of reading chunks of bytes, 
>> I don't see why it could not be abstracted behind `InputStream`.
>>
>> If non-blocking parsing was implemented (similar to how I implemented 
>> Aalto-xml for XML) things would be simpler, as it would be possible to 
>> "push" content, iterate over all complete tokens, and stop when 
>> TOKEN_INCOMPLETE was returned (which indicates that remaining content is 
>> not enough to finish a token).
>>
>> So, at this point, it may well be that aggregator is the simplest way, 
>> unless I misunderstood how http content is accessed via Netty.
>>
>> -+ Tatu +-
>>
>>
>> On Fri, Jan 29, 2016 at 4:55 PM, Evan Maczura <[email protected]> wrote:
>>
>>> I would like to use Jackson's streaming API to develop a streaming 
>>> JSON parser for parsing multiple HttpContent objects received from 
>>> Netty. Currently, one must use HttpObjectAggregator to buffer all of 
>>> the data and then parse it. I feel like this can definitely be 
>>> improved upon for faster, lower-garbage handlers.
>>>
>>> Looking at the API, I can't get a clear picture of how to do this 
>>> properly.
>>> What it looks like I would need is to use reflection to build setters 
>>> for every field and its child objects as well. As the byte arrays come 
>>> in, use a JsonParser to read from the setter field names into a token 
>>> buffer, and append the token buffers until the full message is 
>>> received. Then readValueAs() for the data binding. This would also 
>>> require elegant handling of half-received tokens in the buffers and 
>>> all that good stuff.
>>>
>>> Is my understanding correct? Is there not a better way to do this?
>>>
>>> All input is appreciated.
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "jackson-user" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
