[ https://issues.apache.org/jira/browse/AVRO-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684316#comment-16684316 ]

ASF GitHub Bot commented on AVRO-2262:
--------------------------------------

Fokko closed pull request #376: AVRO-2262: add unit test to test codec behavior on sliced buffers
URL: https://github.com/apache/avro/pull/376
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/lang/java/avro/src/test/java/org/apache/avro/file/TestAllCodecs.java b/lang/java/avro/src/test/java/org/apache/avro/file/TestAllCodecs.java
index 0e531b7a5..c21c56891 100644
--- a/lang/java/avro/src/test/java/org/apache/avro/file/TestAllCodecs.java
+++ b/lang/java/avro/src/test/java/org/apache/avro/file/TestAllCodecs.java
@@ -84,6 +84,39 @@ public void testCodec() throws IOException {
     Assert.assertEquals(decompressedBuffer, inputByteBuffer);
   }
 
+  @Test
+  public void testCodecSlice() throws IOException {
+    int inputSize = 500_000;
+    byte[] input = generateTestData(inputSize);
+
+    Codec codecInstance = CodecFactory.fromString(codec).createInstance();
+
+    ByteBuffer partialBuffer = ByteBuffer.wrap(input);
+    partialBuffer.position(17);
+
+    ByteBuffer inputByteBuffer = partialBuffer.slice();
+    ByteBuffer compressedBuffer = codecInstance.compress(inputByteBuffer);
+
+    int compressedSize = compressedBuffer.remaining();
+
+    // Make sure something returned
+    assertTrue(compressedSize > 0);
+
+    // Create a slice from the compressed buffer
+    ByteBuffer sliceBuffer = ByteBuffer.allocate(compressedSize + 100);
+    sliceBuffer.position(50);
+    sliceBuffer.put(compressedBuffer);
+    sliceBuffer.limit(compressedSize + 50);
+    sliceBuffer.position(50);
+
+    // Decompress the data
+    ByteBuffer decompressedBuffer = codecInstance.decompress(sliceBuffer.slice());
+
+    // Validate that the input and output are equal.
+    inputByteBuffer.rewind();
+    Assert.assertEquals(decompressedBuffer, inputByteBuffer);
+  }
+
   // Generate some test data that will compress easily
   public static byte[] generateTestData(int inputSize) {
     byte[] arr = new byte[inputSize];


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Java compression codec improvements
> -----------------------------------
>
>                 Key: AVRO-2262
>                 URL: https://issues.apache.org/jira/browse/AVRO-2262
>             Project: Apache Avro
>          Issue Type: Task
>          Components: java
>    Affects Versions: 1.8.2
>            Reporter: Fokko Driesprong
>            Assignee: Jacob Tolar
>            Priority: Major
>             Fix For: 1.9.0
>
>
> * Update a few things to use try-with-resources
> * Updated CodecFactory to reference constants for codec names
> * Fixed a small bug in Snappy and BZip2: compression/decompression were 
> incorrect if the input ByteBuffer was a slice(). I don't see anywhere that 
> this would actually happen currently, but some codecs were already written to 
> account for it correctly; now they all are. Updated everything to compute 
> the correct offset into the underlying array. (I can add a test for this in 
> TestAllCodecs once #351 is merged.)
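
The slice() bug described above stems from how ByteBuffer.array() behaves: it always returns the entire backing array, so for a sliced buffer the data does not begin at index 0 and a codec must add arrayOffset() when reading from the array directly. A minimal sketch of the pitfall (class and variable names are illustrative, not taken from the patch):

```java
import java.nio.ByteBuffer;

public class SliceOffsetDemo {
    public static void main(String[] args) {
        byte[] backing = new byte[100];
        backing[17] = 42; // first byte of the region we will slice

        ByteBuffer whole = ByteBuffer.wrap(backing);
        whole.position(17);
        ByteBuffer slice = whole.slice();

        // slice.position() is 0, but the slice shares the backing array,
        // and its data actually starts at index 17 of that array.
        int naiveIndex = slice.position();                         // 0
        int correctIndex = slice.arrayOffset() + slice.position(); // 17

        System.out.println(backing[naiveIndex]);   // 0  (wrong byte)
        System.out.println(backing[correctIndex]); // 42 (first byte of the slice)
    }
}
```

A codec that indexes into buffer.array() starting at buffer.position() alone will read the wrong bytes whenever the input is a slice; adding arrayOffset() makes the computed index correct for both wrapped and sliced buffers.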



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
