[
https://issues.apache.org/jira/browse/AVRO-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881237#action_12881237
]
Doug Cutting commented on AVRO-183:
-----------------------------------
One could implement this as a BinaryEncoder, and performance would be better.
The downside is that not all BinaryEncoders generate output of the same size.
In particular, BlockingBinaryEncoder may break arrays and maps into blocks and
insert block sizes (encoded as negative counts), while all other BinaryEncoders
currently generate the same size for a given object. So an OutputStream-based
implementation may be required for correctness in some cases.
We really need a use case, and I can't recall my motivation when I filed this
issue. Next time I should be sure to include it in the description! Both
implementations may be useful to have, but since BlockingBinaryEncoder is not
often used, perhaps an efficient BinaryEncoder implementation would be most
useful. It should clearly document that the size it returns is not correct for
output written with BlockingBinaryEncoder.
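As a rough sketch of the OutputStream-based approach mentioned above, one could write a stream that discards bytes and only counts them, then wrap an Encoder around it and call DatumWriter#write. The CountingOutputStream class below is illustrative, not an existing Avro class:

```java
import java.io.OutputStream;

// Illustrative sketch: an OutputStream that discards its input but
// counts how many bytes were written. Serializing a datum through an
// Encoder backed by this stream yields the serialized size without
// buffering any output.
class CountingOutputStream extends OutputStream {
    private long count = 0;

    @Override
    public void write(int b) {
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) {
        count += len;
    }

    /** Number of bytes written so far. */
    public long getCount() {
        return count;
    }
}
```

With this in place one would do something like: create a binary Encoder over the counting stream (e.g. via EncoderFactory in more recent Avro APIs; names may differ by version), call writer.write(datum, encoder), flush, and read getCount(). As noted, the count is only meaningful for encoders that produce a deterministic size, so BlockingBinaryEncoder would be excluded.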
> add DatumWriter#sizeOf method that computes the number of bytes an object
> will be serialized as
> -----------------------------------------------------------------------------------------------
>
> Key: AVRO-183
> URL: https://issues.apache.org/jira/browse/AVRO-183
> Project: Avro
> Issue Type: New Feature
> Components: java
> Reporter: Doug Cutting
> Assignee: John Yu
>
> Sometimes it is useful to know how large an object will be when serialized
> before it is in fact serialized.
--
This message is automatically generated by JIRA.