[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163853#comment-17163853
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on pull request #797:
URL: https://github.com/apache/parquet-mr/pull/797#issuecomment-663164259


   Synced with Gabor that we usually don't release on lower versions.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace Hadoop ZSTD with JNI-ZSTD
> ---------------------------------
>
> Key: PARQUET-1866
> URL: https://issues.apache.org/jira/browse/PARQUET-1866
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
> Affects Versions: 1.12.0
> Reporter: Xinli Shang
> Assignee: Xinli Shang
> Priority: Major
> Fix For: 1.12.0
>
>
> The parquet-mr repo has been using 
> [ZSTD-JNI|https://github.com/luben/zstd-jni/tree/master/src/main/java/com/github/luben/zstd]
> for the parquet-cli project. Using this JNI binding is a cleaner approach than 
> using Hadoop ZSTD compression, because 1) installing Hadoop on a development 
> box is cumbersome, and 2) older versions of Hadoop don't support ZSTD, and 
> upgrading Hadoop is another pain. This Jira is to replace Hadoop ZSTD with 
> ZSTD-JNI in the parquet-hadoop project.
> According to the author of ZSTD-JNI, Flink, Spark, and Cassandra all use 
> ZSTD-JNI for ZSTD.
> Another approach would be https://github.com/airlift/aircompressor, which is 
> a pure Java implementation, but it seems the compression level is not 
> adjustable in aircompressor.
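
As a rough illustration of why the JNI binding is attractive here: a minimal
sketch of compressing a stream with zstd-jni at an explicit compression level,
assuming only the zstd-jni artifact on the classpath (the class name, file
names and level below are made up for the example):

import java.io.FileInputStream;
import java.io.FileOutputStream;

import com.github.luben.zstd.ZstdOutputStream;

public class ZstdJniLevelSketch {
  public static void main(String[] args) throws Exception {
    try (FileInputStream in = new FileInputStream("input.bin");
         ZstdOutputStream out = new ZstdOutputStream(new FileOutputStream("output.zst"), 10)) {
      byte[] buffer = new byte[8192];
      int n;
      while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);  // level 10 here; zstd accepts levels 1-22
      }
    }
  }
}

No Hadoop native libraries are involved, which is the main point of the Jira above.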



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163852#comment-17163852
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli closed pull request #797:
URL: https://github.com/apache/parquet-mr/pull/797


   





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135432#comment-17135432
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

dbtsai commented on pull request #797:
URL: https://github.com/apache/parquet-mr/pull/797#issuecomment-643920535


   LGTM. This will help the Spark community adopt ZSTD more easily. Thanks for 
the great work!





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135336#comment-17135336
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli opened a new pull request #797:
URL: https://github.com/apache/parquet-mr/pull/797


   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [ ] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-XXX
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126633#comment-17126633
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

luben commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-639394974


   @shangxinli: I haven't benchmarked.





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126361#comment-17126361
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-639245331


   > @shangxinli do we have a benchmark comparing to the native Hadoop codec, 
   > both in size and speed? Thanks.
   
   Hi @dbtsai, I didn't, because I don't have a Hadoop host with ZSTD installed. 
@luben, did you ever compare it with Hadoop ZSTD?
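
For anyone who wants to produce such numbers locally, a rough sketch of a
size/speed measurement using zstd-jni alone (not a substitute for a benchmark
against the Hadoop codec path; the data size and levels are arbitrary):

import java.util.Random;

import com.github.luben.zstd.Zstd;

public class ZstdQuickBench {
  public static void main(String[] args) {
    byte[] data = new byte[16 * 1024 * 1024];
    new Random(42).nextBytes(data);  // random data is a worst case for compression

    for (int level : new int[] {1, 3, 9, 19}) {
      long start = System.nanoTime();
      byte[] compressed = Zstd.compress(data, level);
      long millis = (System.nanoTime() - start) / 1_000_000;
      System.out.printf("level %d: %d bytes in %d ms%n", level, compressed.length, millis);
    }
  }
}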





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124682#comment-17124682
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky merged pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793


   





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124001#comment-17124001
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r434007162



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);
+    RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, conf);
+    assert(mapRedJob.isSuccessful());
+  }
+
+  private RunningJob runMapReduceJob(CompressionCodecName codec, JobConf jobConf, Configuration conf) throws IOException, ClassNotFoundException, InterruptedException {

Review 

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123990#comment-17123990
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r434002793



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,164 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {

Review comment:
   Added. Thanks.






[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123983#comment-17123983
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r434001031



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,164 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    long fileSizeLowLevel = runMrWithConf(1);
+    // Clear the cache so that a new codec can be created with new configuration
+    CodecFactory.CODEC_BY_NAME.clear();
+    long fileSizeHighLevel = runMrWithConf(22);
+    assert (fileSizeLowLevel > fileSizeHighLevel);

Review comment:
   Sounds good!


[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123531#comment-17123531
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r433723783



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,164 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    long fileSizeLowLevel = runMrWithConf(1);
+    // Clear the cache so that a new codec can be created with new configuration
+    CodecFactory.CODEC_BY_NAME.clear();
+    long fileSizeHighLevel = runMrWithConf(22);
+    assert (fileSizeLowLevel > fileSizeHighLevel);

Review comment:
   Please use the JUnit framework `assert` functions instead of the `assert` 
keyword. The [`assert` 
keyword](https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html)
 is not for unit testing; it is only checked when the JVM is started with 
assertions enabled.
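
To make the suggestion concrete, a small self-contained sketch (not the author's
actual change; the sizes below are placeholders) of the difference between the
two styles:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertStyleExample {

  @Test
  public void preferJUnitAssertions() {
    long fileSizeLowLevel = 100;  // placeholder for the size written at level 1
    long fileSizeHighLevel = 80;  // placeholder for the size written at level 22

    // Preferred: always evaluated and fails the test with a clear message.
    assertTrue("low-level output should be larger", fileSizeLowLevel > fileSizeHighLevel);

    // Discouraged in tests: silently skipped unless the JVM runs with -ea.
    assert fileSizeLowLevel > fileSizeHighLevel;
  }
}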

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123439#comment-17123439
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r433675252



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);
+    RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, conf);
+    assert(mapRedJob.isSuccessful());
+  }
+
+  private RunningJob runMapReduceJob(CompressionCodecName codec, JobConf jobConf, Configuration conf) throws IOException, ClassNotFoundException, InterruptedException {

Review

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-01 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17121423#comment-17121423
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

dbtsai commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-637195534


   @shangxinli do we have a benchmark comparing to the native Hadoop codec, both 
in size and speed? Thanks.





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-06-01 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17121421#comment-17121421
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

dbtsai commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-637193519


   +1 @shangxinli and thank you for this contribution. 
   
   This will allow users who are on older versions of Hadoop that don't support 
native ZSTD to use ZSTD compression in Parquet, and users won't have to go 
through the very complicated Hadoop native installation. For developers, it will 
be easy to test this out in different local environments.
   
   cc @rdblue 





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120011#comment-17120011
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

dongjoon-hyun commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-636231687


   Thank you, @shangxinli and all!
   cc @dbtsai





[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119907#comment-17119907
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432709258



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);
+    RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, conf);
+    assert(mapRedJob.isSuccessful());
+  }
+
+  private RunningJob runMapReduceJob(CompressionCodecName codec, JobConf jobConf, Configuration conf) throws IOException, ClassNotFoundException, InterruptedException {

Review 

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119842#comment-17119842
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432667013



##
File path: parquet-hadoop/README.md
##
@@ -324,9 +324,20 @@ ParquetInputFormat to materialize records. It should be a 
the descendant class o
 **Property:** `parquet.read.schema`  
 **Description:** The read projection schema.
 
-
 ## Class: UnmaterializableRecordCounter
 
 **Property:** `parquet.read.bad.record.threshold`  
 **Description:** The percentage of bad records to tolerate.  
 **Default value:** `0`
+
+## Class: ZstandardCodec
+
+**Property:** `parquet.compression.codec.zstd.level`
+**Description:** The compression level of ZSTD. The valid range is 1~22. 
Generally, the higher the compression level, the higher the compression ratio 
that can be achieved, but the longer the write takes. 

Review comment:
   I see. I added a double space at the end of the Property and Description 
lines. I checked it in IntelliJ and it renders correctly. I also see that the 
sections above use the trailing double space.
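
For reference, a minimal sketch of how the documented property would be set
programmatically; the level value 12 is just an arbitrary pick from the 1~22
range:

import org.apache.hadoop.conf.Configuration;

public class ZstdLevelConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Same key as ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL in this PR.
    conf.setInt("parquet.compression.codec.zstd.level", 12);
    // Pass conf to the Parquet writer or MapReduce job; higher levels trade
    // longer write time for a better compression ratio.
    System.out.println(conf.get("parquet.compression.codec.zstd.level"));
  }
}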







[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119459#comment-17119459
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432381728



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);
+    RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, conf);
+    assert(mapRedJob.isSuccessful());
+  }
+
+  private RunningJob runMapReduceJob(CompressionCodecName codec, JobConf jobConf, Configuration conf) throws IOException, ClassNotFoundException, InterruptedException {

Review

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119449#comment-17119449
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432376409



##
File path: parquet-hadoop/README.md
##
@@ -324,9 +324,20 @@ ParquetInputFormat to materialize records. It should be a 
the descendant class o
 **Property:** `parquet.read.schema`  
 **Description:** The read projection schema.
 
-
 ## Class: UnmaterializableRecordCounter
 
 **Property:** `parquet.read.bad.record.threshold`  
 **Description:** The percentage of bad records to tolerate.  
 **Default value:** `0`
+
+## Class: ZstandardCodec
+
+**Property:** `parquet.compression.codec.zstd.level`
+**Description:** The ZSTD compression level. The valid range is 1~22. Generally, a higher compression level yields a higher compression ratio but a longer write time.

Review comment:
   I was trying to say that _Property_, _Description_ and _Default value_ 
should be separate paragraphs. Currently they are not: the _Description_ is 
rendered right after _Property_ on the same line, even though they are separated 
in the markdown. You should use one of the techniques described under the link to 
force the line break there.
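
   For illustration only (not part of the review): in GitHub-flavored Markdown the 
break can be forced either by ending each line with two trailing spaces (the style 
the existing entries above already use) or with an explicit `<br/>`, e.g.:

```
**Property:** `parquet.compression.codec.zstd.level`<br/>
**Description:** The ZSTD compression level. The valid range is 1~22.<br/>
**Default value:** `3`
```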









[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119100#comment-17119100
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432142474



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);
+    RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, conf);
+    assert(mapRedJob.isSuccessful());
+  }
+
+  private RunningJob runMapReduceJob(CompressionCodecName codec, JobConf jobConf, Configuration conf) throws IOException, ClassNotFoundException, InterruptedException {

Review 

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119075#comment-17119075
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432118673



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstandardCodec.java
##
@@ -0,0 +1,146 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.codec.ZstandardCodec;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
+import org.apache.parquet.hadoop.metadata.CompressionCodecName;
+import org.apache.parquet.schema.MessageTypeParser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Random;
+
+public class TestZstandardCodec {
+
+  private final Path inputPath = new Path("src/test/java/org/apache/parquet/hadoop/example/TestInputOutputFormat.java");
+
+  @Test
+  public void testZstdCodec() throws IOException {
+    ZstandardCodec codec = new ZstandardCodec();
+    Configuration conf = new Configuration();
+    int[] levels = {1, 4, 7, 10, 13, 16, 19, 22};
+    int[] dataSizes = {0, 1, 10, 1024, 1024 * 1024};
+
+    for (int i = 0; i < levels.length; i++) {
+      conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, levels[i]);
+      codec.setConf(conf);
+      for (int j = 0; j < dataSizes.length; j++) {
+        testZstd(codec, dataSizes[j]);
+      }
+    }
+  }
+
+  private void testZstd(ZstandardCodec codec, int dataSize) throws IOException {
+    byte[] data = new byte[dataSize];
+    (new Random()).nextBytes(data);
+    BytesInput compressedData = compress(codec, BytesInput.from(data));
+    BytesInput decompressedData = decompress(codec, compressedData, data.length);
+    Assert.assertArrayEquals(data, decompressedData.toByteArray());
+  }
+
+  private BytesInput compress(ZstandardCodec codec, BytesInput bytes) throws IOException {
+    ByteArrayOutputStream compressedOutBuffer = new ByteArrayOutputStream((int) bytes.size());
+    CompressionOutputStream cos = codec.createOutputStream(compressedOutBuffer, null);
+    bytes.writeAllTo(cos);
+    cos.close();
+    return BytesInput.from(compressedOutBuffer);
+  }
+
+  private BytesInput decompress(ZstandardCodec codec, BytesInput bytes, int uncompressedSize) throws IOException {
+    BytesInput decompressed;
+    InputStream is = codec.createInputStream(bytes.toInputStream(), null);
+    decompressed = BytesInput.from(is, uncompressedSize);
+    is.close();
+    return decompressed;
+  }
+
+  @Test
+  public void testZstdConfWithMr() throws Exception {
+    JobConf jobConf = new JobConf();
+    Configuration conf = new Configuration();
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
+    jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 4);

Review comment:
   Good catch 
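
   (For illustration only; the actual resolution is not shown in this thread.) The 
duplicated `setInt` presumably collapses into a single, explicit level setting, e.g.:

```java
// Sketch of the corrected setup: configure the ZSTD level once on the job conf.
JobConf jobConf = new JobConf();
jobConf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 18);
RunningJob mapRedJob = runMapReduceJob(CompressionCodecName.ZSTD, jobConf, new Configuration());
assert(mapRedJob.isSuccessful());
```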






[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119074#comment-17119074
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r432118486



##
File path: parquet-hadoop/README.md
##
@@ -324,9 +324,20 @@ ParquetInputFormat to materialize records. It should be a 
the descendant class o
 **Property:** `parquet.read.schema`  
 **Description:** The read projection schema.
 
-
 ## Class: UnmaterializableRecordCounter
 
 **Property:** `parquet.read.bad.record.threshold`  
 **Description:** The percentage of bad records to tolerate.  
 **Default value:** `0`
+
+## Class: ZstandardCodec
+
+**Property:** `parquet.compression.codec.zstd.level`
+**Description:** The ZSTD compression level. The valid range is 1~22. Generally, a higher compression level yields a higher compression ratio but a longer write time.

Review comment:
   Thanks for pointing this out! I think it is OK to keep them on the same line 
unless you have a strong opinion that they should be on separate lines. I looked at 
the existing lines above, and some of them are longer than mine.
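
   As additional context (not part of the patch): a minimal sketch of how a writer 
could pick up this level, assuming ExampleParquetWriter from parquet-hadoop and that 
this change wires CompressionCodecName.ZSTD to ZstandardCodec; the schema and output 
path are made up for the example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.codec.ZstandardCodec;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

import java.io.IOException;

public class ZstdLevelExample {
  public static void main(String[] args) throws IOException {
    MessageType schema = MessageTypeParser.parseMessageType("message example { required binary name; }");

    // The level is read by the codec from the Hadoop Configuration, not from the writer API.
    Configuration conf = new Configuration();
    conf.setInt(ZstandardCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 12);

    Path outputPath = new Path("target/zstd-level-example.parquet");  // hypothetical location
    try (ParquetWriter<Group> writer = ExampleParquetWriter.builder(outputPath)
        .withConf(conf)
        .withType(schema)
        .withCompressionCodec(CompressionCodecName.ZSTD)
        .build()) {
      writer.write(new SimpleGroupFactory(schema).newGroup().append("name", "zstd"));
    }
  }
}
```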









[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118158#comment-17118158
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-634988634


   @gszadovszky Do you have time for another look?







[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117370#comment-17117370
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli edited a comment on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-634104930


   @luben, do you have time to review the code?







[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117352#comment-17117352
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

luben commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-634146458


   LGTM







[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117298#comment-17117298
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#issuecomment-634104930


   @karavelov, do you have time to review the code?







[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17114874#comment-17114874
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r429560767



##
File path: 
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/codec/ZstdCodec.java
##
@@ -0,0 +1,112 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop.codec;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionInputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/**
+ * ZSTD compression codec for Parquet.  We do not use the default hadoop
+ * one because it requires 1) to set up hadoop on local develop machine;
+ * 2) to upgrade hadoop to the newer version to have ZSTD support which is
+ * more cumbersome than upgrading parquet version.
+ *
+ * This implementation relies on ZSTD JNI(https://github.com/luben/zstd-jni)
+ * which is already a dependency for Parquet. ZSTD JNI ZstdOutputStream and
+ * ZstdInputStream use Zstd internally. So no need to create compressor and
+ * decompressor in ZstdCodec.
+ */
+public class ZstdCodec implements Configurable, CompressionCodec {
+
+  public final static String PARQUET_COMPRESS_ZSTD_LEVEL = "parquet.compression.codec.zstd.level";
+  public final static int DEFAULT_PARQUET_COMPRESS_ZSTD_LEVEL = 3;
+  public final static String PARQUET_COMPRESS_ZSTD_WORKERS = "parquet.compression.codec.zstd.workers";
+  public final static int DEFAULT_PARQUET_COMPRESS_ZSTD_WORKERS = 0;

Review comment:
   Sure
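
   For context, a minimal sketch (not part of the patch) of the round trip that this 
codec delegates to through zstd-jni's ZstdOutputStream/ZstdInputStream; the level 
value mirrors DEFAULT_PARQUET_COMPRESS_ZSTD_LEVEL above:

```java
import com.github.luben.zstd.ZstdInputStream;
import com.github.luben.zstd.ZstdOutputStream;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ZstdJniRoundTrip {
  public static void main(String[] args) throws IOException {
    byte[] data = "hello zstd".getBytes("UTF-8");
    int level = 3;  // same value as DEFAULT_PARQUET_COMPRESS_ZSTD_LEVEL

    // Compression: ZstdOutputStream wraps any OutputStream, so no Hadoop Compressor is needed.
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    try (ZstdOutputStream zos = new ZstdOutputStream(compressed, level)) {
      zos.write(data);
    }

    // Decompression: ZstdInputStream wraps any InputStream.
    ByteArrayOutputStream restored = new ByteArrayOutputStream();
    try (InputStream zis = new ZstdInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
      byte[] buf = new byte[4096];
      for (int n = zis.read(buf); n != -1; n = zis.read(buf)) {
        restored.write(buf, 0, n);
      }
    }

    System.out.println(new String(restored.toByteArray(), "UTF-8"));  // prints "hello zstd"
  }
}
```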









[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17114875#comment-17114875
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r429560783



##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstdCodec.java
##
@@ -0,0 +1,74 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.hadoop.codec.ZstdCodec;
+import org.junit.Assert;
+import org.junit.Test;  
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Random;
+
+public class TestZstdCodec {

Review comment:
   Added test for conf setting with MR









[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113905#comment-17113905
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

gszadovszky commented on a change in pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793#discussion_r429112028



##
File path: 
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/codec/ZstdCodec.java
##
@@ -0,0 +1,112 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop.codec;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionInputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/**
+ * ZSTD compression codec for Parquet.  We do not use the default hadoop
+ * one because it requires 1) to set up hadoop on local develop machine;

Review comment:
   ```suggestion
* one because it requires 1) to set up hadoop on local development machine;
   ```

##
File path: 
parquet-hadoop/src/test/java/org/apache/parquet/hadoop/TestZstdCodec.java
##
@@ -0,0 +1,74 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.parquet.bytes.BytesInput;
+import org.apache.parquet.hadoop.codec.ZstdCodec;
+import org.junit.Assert;
+import org.junit.Test;  
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Random;
+
+public class TestZstdCodec {

Review comment:
   I tried to find the code path where the hadoop conf is set on the codec but 
could not find it. Please write a high-level test where you set the compression 
level and workers in the hadoop conf and execute a file write via e.g. an MR job.
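
   A rough sketch of the configuration for such a test (assumptions: the property 
names and classes from this patch, with ZstdCodec later renamed ZstandardCodec; the 
mapper that emits `Group` records and the `JobClient.runJob(...)` call are omitted 
and would follow the existing TestInputOutputFormat-style tests):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.parquet.hadoop.codec.ZstdCodec;
import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.hadoop.mapred.DeprecatedParquetOutputFormat;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageTypeParser;

public class ZstdMrConfSketch {

  /** Builds a map-only JobConf that writes ZSTD Parquet with an explicit level and worker count. */
  public static JobConf zstdJobConf(Path input, Path output) {
    JobConf jobConf = new JobConf();

    // ZSTD tuning that the codec reads back from the Hadoop conf.
    jobConf.setInt(ZstdCodec.PARQUET_COMPRESS_ZSTD_LEVEL, 12);
    jobConf.setInt(ZstdCodec.PARQUET_COMPRESS_ZSTD_WORKERS, 2);

    // Plain text in, Parquet out, no reducers.
    jobConf.setInputFormat(TextInputFormat.class);
    FileInputFormat.addInputPath(jobConf, input);
    jobConf.setNumReduceTasks(0);

    jobConf.setOutputFormat(DeprecatedParquetOutputFormat.class);
    FileOutputFormat.setOutputPath(jobConf, output);
    DeprecatedParquetOutputFormat.setCompression(jobConf, CompressionCodecName.ZSTD);
    DeprecatedParquetOutputFormat.setWriteSupportClass(jobConf, GroupWriteSupport.class);
    GroupWriteSupport.setSchema(
        MessageTypeParser.parseMessageType("message doc { required binary line; }"), jobConf);

    return jobConf;
  }
}
```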

##
File path: 
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/codec/ZstdCodec.java
##
@@ -0,0 +1,112 @@
+/* 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.hadoop.codec;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.Co

[jira] [Commented] (PARQUET-1866) Replace Hadoop ZSTD with JNI-ZSTD

2020-05-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113538#comment-17113538
 ] 

ASF GitHub Bot commented on PARQUET-1866:
-

shangxinli opened a new pull request #793:
URL: https://github.com/apache/parquet-mr/pull/793


   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [ ] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-XXX
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




