[ https://issues.apache.org/jira/browse/AVRO-406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831088#action_12831088 ]

Doug Cutting commented on AVRO-406:
-----------------------------------

> You should be able to have a meaningful "sendMeFiveHundredMegabytes()" 
> command that can be used by a client with only a modicum of clever handling.

A way of doing this without changing anything in the spec is to transmit an 
array of chunks, since arrays are already blocked and may be arbitrarily long.  
Todd's API above makes the array of chunks explicit, so perhaps we're okay with 
that approach?
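In other words, a 500 MB response can be modeled as an array of byte chunks: the writer emits chunks one at a time and the reader consumes them one at a time, so neither side buffers the whole payload. A minimal sketch of the chunking idea itself (the class name ChunkDemo and the fixed chunk size are illustrative, not Avro API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: models a large payload as an array of fixed-size
// chunks, the same shape Avro's blocked array encoding puts on the wire.
public class ChunkDemo {
    // Split a payload into chunks of at most chunkSize bytes.
    public static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            int len = Math.min(chunkSize, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble on the reading side; a streaming reader would instead
    // process each chunk as it arrives rather than collecting them all.
    public static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```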

BlockingBinaryEncoder intelligently breaks arrays when they're too big for a 
block, but not otherwise, minimizing block overhead.  Applications can stream 
writes and reads of arbitrarily-large complex objects using this class.  One 
option is thus to code directly to the Encoder/Decoder API and rely on 
ValidatingEncoder and ValidatingDecoder to ensure that calls conform to a 
schema: Encoder/Decoder is Avro's streaming API.
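The wire shape that makes this possible is the blocked array encoding: a series of blocks, each a count followed by that many items, terminated by a block count of zero, so the reader never needs the total length up front. A simplified model of that encoding (real Avro writes the counts as zig-zag varints; plain ints here keep the sketch short, and BlockedArray is not an Avro class):

```java
import java.io.*;
import java.util.*;

// Simplified model of Avro's blocked array encoding: a series of
// blocks, each a count followed by that many items, terminated by a
// block count of zero.  (Real Avro uses zig-zag varint counts.)
public class BlockedArray {
    public static byte[] write(List<List<Integer>> blocks) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        for (List<Integer> block : blocks) {
            out.writeInt(block.size());           // block count
            for (int item : block) out.writeInt(item);
        }
        out.writeInt(0);                          // zero count ends the array
        return buf.toByteArray();
    }

    public static List<Integer> read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        List<Integer> items = new ArrayList<>();
        // The reader consumes one block at a time until the zero
        // terminator, so arbitrarily long arrays stream through.
        for (int count = in.readInt(); count != 0; count = in.readInt()) {
            for (int i = 0; i < count; i++) items.add(in.readInt());
        }
        return items;
    }
}
```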

We don't yet have a higher-level API that permits streaming arbitrarily large 
items.  The Iterable<Chunk> approach Todd proposes should work in some cases.  
To my thinking it is only applicable when the final parameter of a method is an 
array type and/or when the return type is an array type.  Does that sound 
right?  One could annotate the schema somehow, so that the compiler generates 
this alternate API, or perhaps the compiler could simply be modified to always 
generate both styles of methods when these conditions are met.
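To make the two styles concrete, a message whose return type is bytes/array could yield a buffered signature and a streaming one side by side (the interface and method names below are hypothetical, not output of the current compiler):

```java
import java.nio.ByteBuffer;
import java.util.List;

// Hypothetical pair of generated signatures for a message whose
// return type is an array/bytes type: one buffered, one streaming.
public interface FileRead {
    ByteBuffer read(String path);                  // buffers whole response
    Iterable<ByteBuffer> readStream(String path);  // yields chunk by chunk
}

// Toy implementation showing that the streaming form hands back
// chunks lazily instead of materializing the full payload.
class InMemoryFileRead implements FileRead {
    private final List<ByteBuffer> chunks;
    InMemoryFileRead(List<ByteBuffer> chunks) { this.chunks = chunks; }

    public ByteBuffer read(String path) {
        int total = 0;
        for (ByteBuffer b : chunks) total += b.remaining();
        ByteBuffer all = ByteBuffer.allocate(total);
        for (ByteBuffer b : chunks) all.put(b.duplicate());
        all.flip();
        return all;
    }

    public Iterable<ByteBuffer> readStream(String path) {
        return chunks;  // caller iterates; nothing is copied up front
    }
}
```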

The Transceiver interface would also need to change from using List<ByteBuffer> 
to Iterator<ByteBuffer>, so that it can return a response before it has been 
entirely read and accept a request before it has been entirely written.
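The shape of that change might look as follows (a hypothetical simplification, not the actual org.apache.avro.ipc.Transceiver signature):

```java
import java.nio.ByteBuffer;
import java.util.Iterator;

// Hypothetical, simplified Transceiver: the Iterator-based form lets
// the caller start consuming response buffers before the transport
// has read the entire response, and lets the transport start sending
// request buffers before the caller has produced them all.
interface StreamingTransceiver {
    Iterator<ByteBuffer> transceive(Iterator<ByteBuffer> request);
}

// Loopback transport: each buffer is handed to the caller as soon as
// it is "received", without collecting the rest first.
public class Loopback implements StreamingTransceiver {
    public Iterator<ByteBuffer> transceive(Iterator<ByteBuffer> request) {
        return request;
    }
}
```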


> Support streaming RPC calls
> ---------------------------
>
>                 Key: AVRO-406
>                 URL: https://issues.apache.org/jira/browse/AVRO-406
>             Project: Avro
>          Issue Type: New Feature
>          Components: java, spec
>            Reporter: Todd Lipcon
>
> Avro nicely supports chunking of container types into multiple frames. We 
> need to expose this to the RPC layer to facilitate use cases like the Hadoop 
> Datanode where a single "RPC" can yield far more data than should be buffered 
> in memory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
