sunchao commented on a change in pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#discussion_r502564166



##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -76,6 +64,19 @@ public Lz4Compressor(int directBufferSize, boolean useLz4HC) 
{
     this.useLz4HC = useLz4HC;
     this.directBufferSize = directBufferSize;
 
+    try {
+      LZ4Factory lz4Factory = LZ4Factory.fastestInstance();
+      if (useLz4HC) {
+        lz4Compressor = lz4Factory.highCompressor();

Review comment:
       The library also allows configuring the compression level; perhaps we 
can add a Hadoop option to enable that later. This just uses the default 
compression level.
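
lz4-java's `LZ4Factory.highCompressor(int)` overload accepts an explicit level, so a Hadoop config key could feed straight into it. Since lz4-java may not be on every classpath, here is a minimal stdlib sketch of the same wiring pattern using `java.util.zip.Deflater` (which also takes a numeric level); the config key name is hypothetical:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class LevelDemo {
  // Compress `input` at the given level (1..9). In the LZ4 case the level
  // would come from a hypothetical key such as
  // "io.compression.codec.lz4.level" and be passed to highCompressor(level).
  static byte[] deflate(byte[] input, int level) {
    Deflater deflater = new Deflater(level);
    deflater.setInput(input);
    deflater.finish();
    byte[] buf = new byte[input.length * 2 + 64];
    int n = deflater.deflate(buf);
    deflater.end();
    return Arrays.copyOf(buf, n);
  }

  static byte[] inflate(byte[] compressed, int originalLen) {
    try {
      Inflater inflater = new Inflater();
      inflater.setInput(compressed);
      byte[] out = new byte[originalLen];
      int n = inflater.inflate(out);
      inflater.end();
      return Arrays.copyOf(out, n);
    } catch (DataFormatException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    byte[] data = new String(new char[100]).replace("\0", "hello ").getBytes();
    byte[] fast = deflate(data, 1);
    byte[] best = deflate(data, 9);
    // Whatever level the option selects, the output must round-trip.
    if (!Arrays.equals(inflate(fast, data.length), data))
      throw new AssertionError("level 1 round-trip failed");
    if (!Arrays.equals(inflate(best, data.length), data))
      throw new AssertionError("level 9 round-trip failed");
    System.out.println("ok");
  }
}
```

The same validation applies to an LZ4 level option: any accepted level must still produce output the stock decompressor can read.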

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -76,6 +64,19 @@ public Lz4Compressor(int directBufferSize, boolean useLz4HC) 
{
     this.useLz4HC = useLz4HC;
     this.directBufferSize = directBufferSize;
 
+    try {
+      LZ4Factory lz4Factory = LZ4Factory.fastestInstance();
+      if (useLz4HC) {

Review comment:
       nit: seems we no longer need the field `useLz4HC` with this.

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
     // do nothing
   }
 
-  private native static void initIDs();
-
-  private native int decompressBytesDirect();
+  private int decompressDirectBuf() {
+    if (compressedDirectBufLen == 0) {

Review comment:
       I don't think this will ever happen but it's not a big deal.

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
     // do nothing
   }
 
-  private native static void initIDs();
-
-  private native int decompressBytesDirect();
+  private int decompressDirectBuf() {
+    if (compressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `compressedDirectBuf` for reading
+      compressedDirectBuf.limit(compressedDirectBufLen).position(0);
+      lz4Decompressor.decompress((ByteBuffer) compressedDirectBuf,
+              (ByteBuffer) uncompressedDirectBuf);
+      compressedDirectBufLen = 0;
+      compressedDirectBuf.limit(compressedDirectBuf.capacity()).position(0);

Review comment:
       you can just call `clear`?
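
For context on the suggestion: `Buffer.clear()` only resets `position` to 0 and `limit` to `capacity()` (it does not erase contents), so it is exactly the two-call sequence in the patch. A small stdlib check of that equivalence:

```java
import java.nio.ByteBuffer;

public class ClearDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocateDirect(64);
    buf.limit(32).position(10);

    // The two-call reset from the patch...
    buf.limit(buf.capacity()).position(0);
    int lim1 = buf.limit();
    int pos1 = buf.position();

    // ...matches what clear() does in one call.
    buf.limit(32).position(10);
    buf.clear();
    if (buf.limit() != lim1 || buf.position() != pos1)
      throw new AssertionError("clear() differs from limit(capacity()).position(0)");
    System.out.println("clear() == limit(capacity()).position(0)");
  }
}
```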

##########
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
##########
@@ -71,6 +71,11 @@
       <artifactId>assertj-core</artifactId>
       <scope>test</scope>
     </dependency>
+    <dependency>

Review comment:
       why is this needed?

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -302,11 +303,20 @@ public synchronized long getBytesWritten() {
   public synchronized void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  private native int compressBytesDirectHC();
-
-  public native static String getLibraryName();
+  private int compressDirectBuf() {

Review comment:
       seems some of the methods in this class look exactly the same as in 
`SnappyCompressor` - perhaps we can do some refactoring later.

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -302,11 +303,20 @@ public synchronized long getBytesWritten() {
   public synchronized void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  private native int compressBytesDirectHC();
-
-  public native static String getLibraryName();
+  private int compressDirectBuf() {
+    if (uncompressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `uncompressedDirectBuf` for reading
+      uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+      compressedDirectBuf.clear();

Review comment:
       I think this `clear()` isn't necessary since it's already done right 
before the call site?

##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
     // do nothing
   }
 
-  private native static void initIDs();
-
-  private native int decompressBytesDirect();
+  private int decompressDirectBuf() {
+    if (compressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `compressedDirectBuf` for reading

Review comment:
       nit: this comment doesn't add much value - it just states exactly what 
the code is doing.

##########
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java
##########
@@ -330,4 +328,33 @@ public void doWork() throws Exception {
 
     ctx.waitFor(60000);
   }
+
+  @Test
+  public void testLz4Compatibility() throws Exception {
+    Path filePath = new Path(TestLz4CompressorDecompressor.class

Review comment:
       nit: perhaps add some comments on this - not quite sure what it is 
testing.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
