[ https://issues.apache.org/jira/browse/AVRO-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540163#comment-13540163 ]

Judy Hay commented on AVRO-1081:
--------------------------------

Hi,

I'm not sure where to send this so please let me know if I should send it 
somewhere else!

This may be specific to 1.7.3, but it seems that ByteBufferTest is not 
actually run by Maven (because it's not named TestByteBuffer, it doesn't 
match the test naming rules), and its test() method fails when I force it to 
run. Specifically, this is what happens:

The test's X class has two fields, X.name (a String) and X.content (a 
ByteBuffer). The reader and writer are expected to handle them in that order.
The writer first writes X.name as a String into its local buffer.
The writer then calls BufferedBinaryEncoder.writeFixed(ByteBuffer) to write 
X.content. If the size of the content exceeds 2048 bytes (bulkLimit), 
writeFixed() bypasses the local buffer and writes directly to the underlying 
sink (which was previously empty).
The X.name String is then flushed to the sink after X.content, reversing the 
order of the fields.
The reader fails because it expects the fields in the original order (as both 
the schema and the class specify).

Also, if the size of X.content is less than 2048 bytes, the fields are 
written in the expected order and the test passes.
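
For reference, here is a rough, untested sketch of the write path described 
above (not the attached ByteBufferTest; the class and field names mirror it, 
and the 4096-byte direct buffer is just an arbitrary size above bulkLimit):

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;

public class ByteBufferOrderSketch {
  static class X {
    String name = "abc";
    ByteBuffer content;   // reflect maps a ByteBuffer field to Avro "bytes"
  }

  public static void main(String[] args) throws Exception {
    X x = new X();
    // A direct buffer larger than bulkLimit (2048); the attached test uses a
    // memory-mapped file, but any large non-heap buffer should behave alike.
    x.content = ByteBuffer.allocateDirect(4096);

    Schema schema = ReflectData.get().getSchema(X.class);
    ByteArrayOutputStream sink = new ByteArrayOutputStream();

    // binaryEncoder() returns a buffering encoder: X.name lands in its local
    // buffer, while (per the report above) the oversized X.content is sent
    // straight to the sink, ahead of the still-buffered name.
    BinaryEncoder enc = EncoderFactory.get().binaryEncoder(sink, null);
    new ReflectDatumWriter<X>(schema).write(x, enc);
    enc.flush();

    // Reading sink.toByteArray() back with the same schema is where the
    // "Block read partially" error reportedly appears.
  }
}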

The actual reported error message is:
Tests in error: 
  test(org.apache.avro.reflect.TestByteBuffer): java.io.IOException: Block read 
partially, the data may be corrupt

Anyway, I don't know how important this functionality is, but in any case, it 
doesn't appear to be tested in the current Maven setup. Just wanted to let you 
know!

Regards,

Judy Hay



> GenericDatumWriter does not support native ByteBuffers
> ------------------------------------------------------
>
>                 Key: AVRO-1081
>                 URL: https://issues.apache.org/jira/browse/AVRO-1081
>             Project: Avro
>          Issue Type: Bug
>    Affects Versions: 1.6.3
>            Reporter: Robert Fuller
>            Assignee: Doug Cutting
>             Fix For: 1.7.0
>
>         Attachments: AVRO-1081.patch, ByteBufferTest.java, 
> ByteBufferTest.java, patch.diff.txt, patch.diff.txt
>
>
> An exception is thrown when trying to encode bytes backed by a file.
> java.lang.UnsupportedOperationException: null
>       at java.nio.ByteBuffer.arrayOffset(ByteBuffer.java:968) ~[na:1.6.0_31]
>       at org.apache.avro.io.BinaryEncoder.writeBytes(BinaryEncoder.java:61) 
> ~[avro-1.6.3.jar:1.6.3]
> Note that arrayOffset() is an optional operation; see:
> http://docs.oracle.com/javase/6/docs/api/java/nio/ByteBuffer.html#arrayOffset%28%29
> FileChannel returns a native (direct) ByteBuffer, not a HeapByteBuffer.
> See here:
> http://mail-archives.apache.org/mod_mbox/avro-user/201202.mbox/%3ccb57f421.6bfe2%25sc...@richrelevance.com%3E
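
For context on the quoted report, a minimal sketch of why a file-backed 
buffer trips arrayOffset() (the file name "data.bin" is a placeholder, not 
from the report):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBufferSketch {
  public static void main(String[] args) throws Exception {
    RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
    FileChannel ch = raf.getChannel();
    try {
      // FileChannel.map() returns a direct MappedByteBuffer with no backing
      // heap array, so hasArray() is false ...
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      System.out.println(buf.hasArray());
      // ... and array()/arrayOffset() throw UnsupportedOperationException,
      // which is the exception in the stack trace above.
      buf.arrayOffset();
    } finally {
      ch.close();
      raf.close();
    }
  }
}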

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
