[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032831#comment-17032831
 ] 

John Zhuge commented on HADOOP-12990:
--------------------------------------

Yes indeed. Actually there are three apps (the lz4 tool, Hadoop, and Spark) 
claiming the `.lz4` extension.

> lz4 incompatibility between OS and Hadoop
> -----------------------------------------
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hits an exception when trying to view a compressed file 
> created by the Linux lz4 tool.
> The Hadoop version includes HADOOP-11184 "update lz4 to r123", so it is using 
> the LZ4 library from release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}






[jira] [Comment Edited] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268366#comment-15268366
 ] 

John Zhuge edited comment on HADOOP-12990 at 2/8/20 6:36 AM:
-------------------------------------------------------------

[~cavanaug], the lz4 command line tool and hadoop-lz4 use the same LZ4 codec 
library; the difference is only in the framing. See my comment and hack on 4/3.
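As a rough illustration of the framing difference, here is a minimal probe (an 
editorial sketch, assuming the standard LZ4 frame magic numbers; the class name 
is hypothetical):
{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;

public class Lz4FramingProbe {
  public static void main(String[] args) throws Exception {
    try (DataInputStream in =
             new DataInputStream(new FileInputStream(args[0]))) {
      // The lz4 CLI writes an LZ4 magic number (0x184D2204 for the frame
      // format, 0x184C2102 for the old legacy format), stored little-endian.
      // Hadoop's BlockCompressorStream instead begins with a 4-byte
      // big-endian uncompressed-length header, so no magic is present.
      int word = Integer.reverseBytes(in.readInt());
      if (word == 0x184D2204 || word == 0x184C2102) {
        System.out.println("LZ4 frame/legacy format (lz4 CLI)");
      } else {
        System.out.println("No LZ4 magic; possibly Hadoop block framing");
      }
    }
  }
}
{code}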

Questions for your use case:
 * Do your JSON files contain a single JSON object or many JSON records?
 * After ingesting into HDFS, how do you plan to use the data?
 * Have you considered one of these splittable container file formats with 
compression: SequenceFile, RCFile, ORC, Avro, Parquet? Within the container, 
they can use any Hadoop codec, including LZ4 (see the sketch below).
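A minimal sketch of the SequenceFile option (assuming the Hadoop client 
libraries and native LZ4 support are available; the output path, key/value 
types, and record contents are hypothetical):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.Lz4Codec;

public class Lz4SequenceFileWriter {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("/tmp/10rows.seq"); // hypothetical output path
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(path),
        SequenceFile.Writer.keyClass(LongWritable.class),
        SequenceFile.Writer.valueClass(Text.class),
        // BLOCK compression compresses batches of records and keeps the
        // file splittable for MapReduce/Spark input
        SequenceFile.Writer.compression(
            SequenceFile.CompressionType.BLOCK, new Lz4Codec()))) {
      writer.append(new LongWritable(1), new Text("001|c1|c2|c3|c4"));
    }
  }
}
{code}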


was (Author: jzhuge):
[~cavanaug], lz4 command line and hadoop-lz4 use the same lz4 codec library. 
The difference is only the framing, see my comment and hack on 4/3.

Questions for your use case:
* Do your JSON files contain a single JSON object or many JSON records?
* After ingesting into HDFS, how do you plan to use the data?
* Have considered these splittable container file formats with compression: 
SequenceFile, RCFile, ORC, Avro, Parquet? In the container, they can choose any 
Hadoop codec, including LZ4.





[jira] [Comment Edited] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032199#comment-17032199
 ] 

John Zhuge edited comment on HADOOP-12990 at 2/8/20 6:31 AM:
-------------------------------------------------------------

An OOM usually indicates a format mismatch, e.g., reading a garbage value as a 
huge block size and then trying to allocate that much memory.

After looking into the Spark code, I realized I was wrong about it using the 
Hadoop codec. Spark uses its own LZ4 codec based on 
[lz4-java|https://github.com/lz4/lz4-java]. Check out [LZ4CompressionCodec in 
2.3.4|https://github.com/apache/spark/blob/v2.3.4/core/src/main/scala/org/apache/spark/io/CompressionCodec.scala#L113-L124].

Its javadoc points out:
{quote} * @note The wire protocol for this codec is not guaranteed to be 
compatible across versions
 * of Spark. This is intended for use as an internal compression utility within 
a single Spark
 * application.{quote}
I am not sure whether lz4-java's LZ4BlockOutputStream output can be read by the 
Linux lz4 tool.

Your best bet may be to write a Java decompression application with a 
compatible version of lz4-java, e.g., 1.4.0, the version used by Spark 2.3; a 
sketch follows.
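A minimal sketch of such a decompressor (assuming lz4-java 1.4.0 on the 
classpath and input written by LZ4BlockOutputStream; the class name is 
hypothetical):
{code:java}
import java.io.FileInputStream;
import java.io.FileOutputStream;

import net.jpountz.lz4.LZ4BlockInputStream;

public class SparkLz4Decompressor {
  public static void main(String[] args) throws Exception {
    // args[0]: file written via LZ4BlockOutputStream; args[1]: plain output
    try (LZ4BlockInputStream in =
             new LZ4BlockInputStream(new FileInputStream(args[0]));
         FileOutputStream out = new FileOutputStream(args[1])) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    }
  }
}
{code}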


was (Author: jzhuge):
OOM usually indicates format mismatch, e.g., reading a large block size, then 
trying to allocate memory.

After looking into Spark code, I realized I was wrong about using Hadoop codec. 
Spark uses its own LZ4 codec based on 
[lz4-java|https://github.com/lz4/lz4-java]. Check out [LZ4CompressionCodec in 
2.3.4|https://github.com/apache/spark/blob/v2.3.4/core/src/main/scala/org/apache/spark/io/CompressionCodec.scala#L113-L124].

Its javadoc points out:
{quote} * @note The wire protocol for this codec is not guaranteed to be 
compatible across versions
 * of Spark. This is intended for use as an internal compression utility within 
a single Spark
 * application.{quote}
Not sure whether lz4-java LZ4BlockOutputStream output can be read by Linux lz4 
tool.

Your best bet may be writing a Java decompression application with a compatible 
version of lz4-java, e.g., Spark 2.3 uses lz4-java 1.4.0.

[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread Redriver (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032814#comment-17032814
 ] 

Redriver commented on HADOOP-12990:
-----------------------------------

[~jzhuge] Thanks for the information. It is confusing to see two different 
kinds of LZ4 across Hadoop and Spark. Anyway, I need to write a decompressor 
for Spark's LZ4.




[jira] [Commented] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032708#comment-17032708
 ] 

Hadoop QA commented on HADOOP-16847:
------------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992925/HADOOP-16874-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux acbd284dae39 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 23787e4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16759/testReport/ |
| Max. process+thread count | 1361 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16759/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
>   

[GitHub] [hadoop] hadoop-yetus commented on issue #1835: HADOOP-16847.Test can fail if HashSet iterates in a different order

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1835: HADOOP-16847.Test can fail if HashSet 
iterates in a different order
URL: https://github.com/apache/hadoop/pull/1835#issuecomment-583642137
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 56s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 17s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 13s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 106m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1835 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d9d7814f0589 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 23787e4 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/2/testReport/ |
   | Max. process+thread count | 1208 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Description: 
The test `testNegativeGroupCaching` can fail if the iteration order of HashSet 
changes. In detail, the method `assertEquals` (line 331) compares 
`groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.
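A minimal illustration of the difference (not the Hadoop test itself; the 
class name is hypothetical):
{code:java}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class IterationOrderDemo {
  public static void main(String[] args) {
    Set<String> hash = new HashSet<>();
    Set<String> linked = new LinkedHashSet<>();
    for (String g : new String[] {"grp1", "grp2", "grp3"}) {
      hash.add(g);
      linked.add(g);
    }
    // HashSet iteration order is unspecified and may differ across JVM
    // versions or element hash codes; LinkedHashSet preserves insertion
    // order, so assertions against a fixed list remain stable.
    System.out.println(hash);
    System.out.println(linked); // always [grp1, grp2, grp3]
  }
}
{code}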

  was:
The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. In detail, the method `assertEquals` (line 331) compares 
`groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.


> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
> However, the iteration is non-deterministic.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Attachment: HADOOP-16874-001.patch

> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The method `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
> However, the iteration is non-deterministic.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[jira] [Comment Edited] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032039#comment-17032039
 ] 

testfixer0 edited comment on HADOOP-16847 at 2/7/20 8:32 PM:
-------------------------------------------------------------

The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. In detail, the method `assertEquals` (line 331) compares 
`groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.


was (Author: testfixer):
The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. So, this PR proposes to modify HashSet to LinkedHashSet for a 
deterministic order

> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch
>
>
> The method `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
> However, the iteration is non-deterministic.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Description: 
The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. In detail, the method `assertEquals` (line 331) compares 
`groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.

  was:
The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. In detail, the method `assertEquals` (line 331) compare 
`groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.


> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch
>
>
> The method `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
> However, the iteration is non-deterministic.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Description: 
The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. In detail, the method `assertEquals` (line 331) compare 
`groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
However, the iteration is non-deterministic.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.

  was:
The method `testNegativeGroupCaching` in `TestGroupsCaching` can fail if the 
iteration order of HashSet changes. In detail, in line 331, the method 
`assertEquals` compare `groups.getGroups(user)` with a ArrayList `myGroups` . 
However,  `groups.getGroups(user)` is related with  `FakeGroupMapping`. The 
variables `allGroups` and `blackList` are defined as HashSet in 
`FakeGroupMapping`, and it can iterate in a different order.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.


> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch
>
>
> The method `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. In detail, the method `assertEquals` (line 331) compare 
> `groups.getGroups(user)` with a ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list and it calls iterator in HashSet. 
> However, the iteration is non-deterministic.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[GitHub] [hadoop] hadoop-yetus commented on issue #1836: HADOOP-16646. Backport S3A enhancements and fixes from trunk to branch-3.2

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1836: HADOOP-16646. Backport S3A enhancements 
and fixes from trunk to branch-3.2
URL: https://github.com/apache/hadoop/pull/1836#issuecomment-583596821
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
9 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 56s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  17m 22s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  17m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 52s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 2 new 
+ 32 unchanged - 0 fixed = 34 total (was 32)  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  2s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   4m 45s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 127m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1836 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux ec3eb7e260b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-3.2 / aca9304 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/testReport/ |
   | Max. process+thread count | 1598 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-07 Thread GitBox
goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to 
the Router
URL: https://github.com/apache/hadoop/pull/1832#discussion_r376562517
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterFsckServlet.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.security.PrivilegedExceptionAction;
+import java.util.Map;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.common.JspHelper;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * This class is used in Namesystem's web server to do fsck on namenode.
+ */
+@InterfaceAudience.Private
+public class RouterFsckServlet extends HttpServlet {
+  /** for java.io.Serializable */
 
 Review comment:
   First sentence should end with a period. [JavadocStyle]
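   A fix along the suggested lines could be, e.g.:
   ```java
   /** For java.io.Serializable. */
   ```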





[GitHub] [hadoop] goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-07 Thread GitBox
goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to 
the Router
URL: https://github.com/apache/hadoop/pull/1832#discussion_r376562407
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterFsck.java
 ##
 @@ -0,0 +1,158 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.net.URL;
+import java.net.URLConnection;
+import java.nio.charset.StandardCharsets;
+import java.util.Collections;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
+import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetNamenodeRegistrationsRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetNamenodeRegistrationsResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.Time;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Wrapper for the Router to offer the Namenode FSCK.
+ */
+@InterfaceAudience.Private
+public class RouterFsck {
+
+  public static final Logger LOG =
+      LoggerFactory.getLogger(RouterFsck.class.getName());
+
+  private final Router router;
+  private final InetAddress remoteAddress;
+  private final PrintWriter out;
+  private final Map<String, String[]> pmap;
+
+  public RouterFsck(Router router, Map<String, String[]> pmap,
+      PrintWriter out, InetAddress remoteAddress) {
+    this.router = router;
+    this.remoteAddress = remoteAddress;
+    this.out = out;
+    this.pmap = pmap;
+  }
+
+  public void fsck() {
+    final long startTime = Time.monotonicNow();
+    try {
+      String msg = "Federated FSCK started by " +
+          UserGroupInformation.getCurrentUser() + " from " + remoteAddress +
+          " at " + new Date();
+      LOG.info(msg);
+      out.println(msg);
+
+      // Check each Namenode in the federation
+      StateStoreService stateStore = router.getStateStore();
+      MembershipStore membership =
+          stateStore.getRegisteredRecordStore(MembershipStore.class);
+      GetNamenodeRegistrationsRequest request =
+          GetNamenodeRegistrationsRequest.newInstance();
+      GetNamenodeRegistrationsResponse response =
+          membership.getNamenodeRegistrations(request);
+      List<MembershipState> memberships = response.getNamenodeMemberships();
+      Collections.sort(memberships);
+      for (MembershipState nn : memberships) {
+        if (nn.getState() == FederationNamenodeServiceState.ACTIVE) {
+          try {
+            String webAddress = nn.getWebAddress();
+            out.write("Checking " + nn + " at " + webAddress + "\n");
+            remoteFsck(nn);
+          } catch (IOException ioe) {
+            out.println("Cannot query " + nn + ": " + ioe.getMessage() + "\n");
+          }
+        }
+      }
+
+      out.println("Federated FSCK ended at " + new Date() + " in "
+          + (Time.monotonicNow() - startTime + " milliseconds"));
+    } catch (Exception e) {
+      String errMsg = "Fsck " + e.getMessage();
+      LOG.warn(errMsg, e);
+      out.println("Federated FSCK ended at " + new Date() + " in "
+          + (Time.monotonicNow() - startTime + " milliseconds"));
+      out.println(e.getMessage());
+      out.print("\n\n" + errMsg);
+    } finally {
+      out.close();
+    }
+  }
+
+  /**
+   * Perform FSCK in a remote Namenode.
+   *
+   * @param nn The state of the remote NameNode
+   * @throws IOException Failed to fsck in a remote NameNode
+   */
+  private void remoteFsck(MembershipState nn) throws IOException {
+    final String scheme = 

[GitHub] [hadoop] goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-07 Thread GitBox
goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to 
the Router
URL: https://github.com/apache/hadoop/pull/1832#discussion_r376562727
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterFsckServlet.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.security.PrivilegedExceptionAction;
+import java.util.Map;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.common.JspHelper;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * This class is used in Namesystem's web server to do fsck on namenode.
+ */
+@InterfaceAudience.Private
+public class RouterFsckServlet extends HttpServlet {
+  /** for java.io.Serializable */
+  private static final long serialVersionUID = 1L;
+
+  public static final String SERVLET_NAME = "fsck";
+  public static final String PATH_SPEC = "/fsck";
+
+  /** Handle fsck request */
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response
+  ) throws IOException {
 
 Review comment:
   Bad indent.
   Check other checkstyles:
   
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt





[GitHub] [hadoop] goiri commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-07 Thread GitBox
goiri commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router
URL: https://github.com/apache/hadoop/pull/1832#issuecomment-583562603
 
 
   Yetus does not look very happy but it seems unrelated.
   Please double check.





[GitHub] [hadoop] steveloughran opened a new pull request #1836: HADOOP-16646. Backport S3A enhancements and fixes from trunk to branch-3.2

2020-02-07 Thread GitBox
steveloughran opened a new pull request #1836: HADOOP-16646. Backport S3A 
enhancements and fixes from trunk to branch-3.2
URL: https://github.com/apache/hadoop/pull/1836
 
 
   This picks up ~all the changes in hadoop-trunk related to s3a and backports 
them to hadoop-3.2.x.
   
   Includes:
   * minor external changes which came along with this (e.g. hadoop token code)
   
   Excludes: 
   * JAR updates other than of the AWS SDK
   * anything which would force broader changes across the code
   
   Trouble spots:
   * going back to the older mockito is trouble; things compile but don't run. 
Luckily I've done enough backporting of the relevant tests to internal branches 
that I've already done that work.
   
   





[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Description: 
The method `testNegativeGroupCaching` in `TestGroupsCaching` can fail if the 
iteration order of HashSet changes. In detail, in line 331, the method 
`assertEquals` compare `groups.getGroups(user)` with a ArrayList `myGroups` . 
However,  `groups.getGroups(user)` is related with  `FakeGroupMapping`. The 
variables `allGroups` and `blackList` are defined as HashSet in 
`FakeGroupMapping`, and it can iterate in a different order.

This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.

  was:The method `testNegativeGroupCaching` can fail if the iteration order of 
HashSet changes. So, this PR proposes to modify HashSet to LinkedHashSet for a 
deterministic order


> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch
>
>
> The method `testNegativeGroupCaching` in `TestGroupsCaching` can fail if the 
> iteration order of HashSet changes. In detail, in line 331, the method 
> `assertEquals` compare `groups.getGroups(user)` with a ArrayList `myGroups` . 
> However,  `groups.getGroups(user)` is related with  `FakeGroupMapping`. The 
> variables `allGroups` and `blackList` are defined as HashSet in 
> `FakeGroupMapping`, and it can iterate in a different order.
> This PR proposes to modify HashSet to LinkedHashSet for a deterministic order.






[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-07 Thread GitBox
steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling 
exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-583482153
 
 
   Experimental directory marker optimization feature removed. It was really 
broken, and its very presence would only encourage people to turn it on, or at 
least start demanding that it be production-ready within a short period of 
time, and then be very disappointed when that couldn't happen.





[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated HADOOP-16847:
--------------------------------
Priority: Minor  (was: Major)

> Test can fail if HashSet iterates in a different order
> ------------------------------------------------------
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch
>
>
> The method `testNegativeGroupCaching` can fail if the iteration order of 
> HashSet changes. So, this PR proposes to modify HashSet to LinkedHashSet for 
> a deterministic order






[jira] [Updated] (HADOOP-16834) Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java

2020-02-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16834:
-----------------------------------
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~risyomei] for your contribution!
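For reference, the change amounts to swapping the import; a minimal 
illustration (the class and method below are hypothetical, not the actual 
DNS.java code):
{code:java}
// Before: import com.sun.istack.Nullable;
// After (HADOOP-16834):
import javax.annotation.Nullable;

class DnsExample {
  // @Nullable documents that the interface name may be absent
  static String resolve(@Nullable String strInterface) {
    return strInterface == null ? "default" : strInterface;
  }
}
{code}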

> Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java
> --------------------------------------------------------------------------
>
> Key: HADOOP-16834
> URL: https://issues.apache.org/jira/browse/HADOOP-16834
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16834.001.patch, HADOOP-16834.002.patch
>
>
> com.sun.istack.Nullable is used only in DNS.java, while 
> javax.annotation.Nullable is widely used.
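
A hedged sketch of the swap (the real DNS.java signature may differ; 
javax.annotation.Nullable comes from the jsr305 artifact):

{code}
// Before: import com.sun.istack.Nullable;
import javax.annotation.Nullable;

class DnsExample {
    // A @Nullable parameter documents that callers may pass null.
    static String resolve(@Nullable String nameserver) {
        return nameserver == null ? "<default resolver>" : nameserver;
    }
}
{code}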



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router
URL: https://github.com/apache/hadoop/pull/1832#issuecomment-583404297
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  27m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 39s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 21s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 21s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  the patch passed  |
   | -1 :x: |  compile  |   4m 23s |  hadoop-hdfs-project in the patch failed.  
|
   | -1 :x: |  javac  |   4m 23s |  hadoop-hdfs-project in the patch failed.  |
   | -0 :warning: |  checkstyle  |   1m 15s |  hadoop-hdfs-project: The patch 
generated 11 new + 1 unchanged - 0 fixed = 12 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  98m 49s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   8m 36s |  hadoop-hdfs-rbf in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 36s |  The patch generated 43 ASF License 
warnings.  |
   |  |   | 227m 15s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.fs.viewfs.TestViewFsHdfs |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestMiniDFSCluster |
   |   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
   |   | hadoop.fs.TestEnhancedByteBufferAccess |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover |
   |   | hadoop.hdfs.server.diskbalancer.TestConnectors |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1832 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 967dc8cb56b9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fafe78f |
   | Default Java | 1.8.0_242 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/patch-compile-hadoop-hdfs-project.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/patch-compile-hadoop-hdfs-project.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4229 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs 

[GitHub] [hadoop] hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-583380344
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  27m 52s |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 50s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 47s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 15s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 15s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 15s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 14s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 16s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 19s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 18s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 18s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  71m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 7745ce42e0ec 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fafe78f |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/branch-mvninstall-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032336#comment-17032336
 ] 

Hadoop QA commented on HADOOP-16847:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992820/HADOOP-16847-000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f2e18c8dde45 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fafe78f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16758/testReport/ |
| Max. process+thread count | 1387 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16758/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Test can fail if HashSet iterates in a different order
> --
>
> Key: HADOOP-16847
>   

[GitHub] [hadoop] hadoop-yetus commented on issue #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1826: HADOOP-16823. Manage S3 Throttling 
exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-583367137
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  9s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 25s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 41s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 35s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 57s |  trunk passed  |
   | -0 :warning: |  patch  |   2m 22s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 46s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 15s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 36s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 127m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux fb3047deee9a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7dac7e1 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[GitHub] [hadoop] hadoop-yetus commented on issue #1835: HADOOP-16847.Test can fail if HashSet iterates in a different order

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1835: HADOOP-16847.Test can fail if HashSet 
iterates in a different order
URL: https://github.com/apache/hadoop/pull/1835#issuecomment-583365773
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 26s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 32s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m  6s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 43s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 122m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1835 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bdd9d1487aa6 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7dac7e1 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/testReport/ |
   | Max. process+thread count | 1384 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2020-02-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16410.
-
Resolution: Fixed

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --
>
> Key: HADOOP-16410
> URL: https://issues.apache.org/jira/browse/HADOOP-16410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Jose Luis Pedrosa
>Priority: Minor
> Fix For: 3.2.2
>
>
>  The openjdk8 image is based on Alpine 3.9, which means the shipped version 
> of libssl is 1.1.1b-r1:
>   
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
> {noformat}
> The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by 
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version 
> OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking 
> for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, creating a Docker image with an updated version of 
> wildfly-openssl (1.0.7.Final) solves the issue.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-07 Thread GitBox
hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-583353780
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 13s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 14s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 14s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 11s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 16s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 16s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 16s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 15s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 73f05f3de826 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fafe78f |
   | Default Java | 1.8.0_232 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/testReport/ |
   | Max. process+thread count | 311 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HADOOP-16834) Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java

2020-02-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032302#comment-17032302
 ] 

Hudson commented on HADOOP-16834:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17929/])
HADOOP-16834. Replace com.sun.istack.Nullable with (aajisaka: rev 
3ebf5059651658dd5ed5dbc5fcba4e814b55c34c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java


> Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java
> --
>
> Key: HADOOP-16834
> URL: https://issues.apache.org/jira/browse/HADOOP-16834
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16834.001.patch, HADOOP-16834.002.patch
>
>
> com.sun.istack.Nullable is used only in DNS.java, while 
> javax.annotation.Nullable is widely used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16825) ITestAzureBlobFileSystemCheckAccess failing

2020-02-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032300#comment-17032300
 ] 

Hudson commented on HADOOP-16825:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17929/])
HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing. Contributed (tmarq: 
rev 5944d28130925fe1452f545e96b5e44f064bc69e)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java


> ITestAzureBlobFileSystemCheckAccess failing
> ---
>
> Key: HADOOP-16825
> URL: https://issues.apache.org/jira/browse/HADOOP-16825
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.3.0
>
>
> Tests added in HADOOP-16455 are failing.
> java.lang.IllegalArgumentException: The value of property 
> fs.azure.account.oauth2.client.id must not be null
> Looks to me like there are new configuration options which are undocumented:
> # these need documentation in the testing markdown file
> # tests MUST downgrade to skip if not set (see the sketch below)
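
A minimal sketch of such a downgrade-to-skip guard, assuming JUnit 4's Assume; 
the class and method names are illustrative, and the config key is taken from 
the exception above:

{code}
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;

public class CheckAccessPrecondition {
    static void assumeOAuthConfigured(Configuration conf) {
        String clientId = conf.get("fs.azure.account.oauth2.client.id");
        // Assume.assumeTrue skips the test (instead of failing it) when the
        // condition is false, i.e. when the account is not configured.
        Assume.assumeTrue("fs.azure.account.oauth2.client.id not set; skipping",
            clientId != null);
    }
}
{code}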



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16845) ITestAbfsClient.testContinuationTokenHavingEqualSign failing

2020-02-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032299#comment-17032299
 ] 

Hudson commented on HADOOP-16845:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17929/])
HADOOP-16845: Disable (tmarq: rev 55f2421580678a6793c8cb6ad10fee3f4ec833aa)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java


> ITestAbfsClient.testContinuationTokenHavingEqualSign failing
> 
>
> Key: HADOOP-16845
> URL: https://issues.apache.org/jira/browse/HADOOP-16845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>
> The test case testContinuationTokenHavingEqualSign is failing because a 
> request that was expected to fail is now passing.
> There is a change in the query-param validation of ContinuationToken at the 
> server end which has resulted in this behaviour.
> Server request trace:
> 2020-02-05 16:59:17,001 DEBUG [JUnit-testContinuationTokenHavingEqualSign]: 
> services.AbfsClient (AbfsRestOperation.java:executeHttpOperation(263)) - 
> HttpRequest: 
> 200,,cid=87c3ebea-def7-4fdd-a21a-a56c63a59387,rid=0931c565-201f-004c-1317-dcdd9000,sent=0,recv=0,GET,[https://snvijayaabfsns.dfs.core.windows.net/abfs-testcontainer-85bb9523-fccd-45f3-ae6d-37622d8231e5?upn=false=filesystem=500=/=%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D=90=true]
>  
> Disabling the test until the server fix is in and deployed on all regions.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16596) [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032301#comment-17032301
 ] 

Hudson commented on HADOOP-16596:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17929/])
HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from (github: rev 
7dac7e1d13eaf0eac04fe805c7502dcecd597979)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/ipc/TestRPCUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetSubClustersInfoRequestPBImpl.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/protocolPB/HSAdminRefreshProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/ContainerUpdateResponsePBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializerPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/EnableNameserviceResponsePBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetLocalizationStatusesResponsePBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/records/impl/pb/ContainerStartDataPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/LogAggregationReportPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RouterHeartbeatRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/MoveApplicationAcrossQueuesRequestPBImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationResourceUsageReportPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceManagerAdministrationProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/records/impl/pb/ContainerFinishDataPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/SCMAdminProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetNamespaceInfoResponsePBImpl.java
* (edit) 

[jira] [Commented] (HADOOP-16834) Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java

2020-02-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032289#comment-17032289
 ] 

Hadoop QA commented on HADOOP-16834:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-16834 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992848/HADOOP-16834.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16757/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java
> --
>
> Key: HADOOP-16834
> URL: https://issues.apache.org/jira/browse/HADOOP-16834
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16834.001.patch, HADOOP-16834.002.patch
>
>
> com.sun.istack.Nullable is used only in DNS.java, while 
> javax.annotation.Nullable is widely used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #1828: Bump checkstyle from 8.26 to 8.29

2020-02-07 Thread GitBox
aajisaka merged pull request #1828: Bump checkstyle from 8.26 to 8.29
URL: https://github.com/apache/hadoop/pull/1828
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16834) Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java

2020-02-07 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032273#comment-17032273
 ] 

Akira Ajisaka commented on HADOOP-16834:


+1

> Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java
> --
>
> Key: HADOOP-16834
> URL: https://issues.apache.org/jira/browse/HADOOP-16834
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16834.001.patch, HADOOP-16834.002.patch
>
>
> com.sun.istack.Nullable is used only in DNS.java, while 
> javax.annotation.Nullable is widely used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16596) [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16596.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
 Release Note: All protobuf classes will be used from the 
hadoop-shaded-protobuf_3_7 artifact with the package prefix 
'org.apache.hadoop.thirdparty.protobuf' instead of 'com.google.protobuf'.
   Resolution: Fixed

Merged to trunk. Thanks, everyone, for the reviews.
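
A hedged sketch of what the relocation means for code (class and method names 
are illustrative; the package prefix is from the release note above):

{code}
// Before: import com.google.protobuf.Message;
import org.apache.hadoop.thirdparty.protobuf.Message;

class ShadedPbExample {
    static boolean ready(Message m) {
        // Same protobuf API, relocated under the Hadoop thirdparty prefix.
        return m.isInitialized();
    }
}
{code}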

> [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency
> --
>
> Key: HADOOP-16596
> URL: https://issues.apache.org/jira/browse/HADOOP-16596
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Use the shaded protobuf classes from "hadoop-thirdparty" in the Hadoop codebase.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread GitBox
vinayakumarb commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from hadoop-thirdparty dependency
URL: https://github.com/apache/hadoop/pull/1635#issuecomment-583304100
 
 
   Merged to trunk. Thanks, everyone, for the reviews.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb merged pull request #1635: HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread GitBox
vinayakumarb merged pull request #1635: HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from hadoop-thirdparty dependency
URL: https://github.com/apache/hadoop/pull/1635
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread GitBox
vinayakumarb commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from hadoop-thirdparty dependency
URL: https://github.com/apache/hadoop/pull/1635#issuecomment-583302737
 
 
   Thanks @oza for confirmation.
   Also thanks @ayushtkn  for reviews.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032202#comment-17032202
 ] 

John Zhuge commented on HADOOP-12990:
-

[~redriver] since Hadoop is not involved, you might want to file a Spark JIRA 
and continue the discussion there.

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032199#comment-17032199
 ] 

John Zhuge edited comment on HADOOP-12990 at 2/7/20 8:35 AM:
-

OOM usually indicates a format mismatch, e.g., the decompressor misreads bytes 
as a huge block length and then tries to allocate a buffer of that size.

After looking into the Spark code, I realized I was wrong about it using the 
Hadoop codec. 
Spark uses its own LZ4 codec based on 
[lz4-java|https://github.com/lz4/lz4-java]. Check out [LZ4CompressionCodec in 
2.3.4|https://github.com/apache/spark/blob/v2.3.4/core/src/main/scala/org/apache/spark/io/CompressionCodec.scala#L113-L124].

Its javadoc points out:
{quote} * @note The wire protocol for this codec is not guaranteed to be 
compatible across versions
 * of Spark. This is intended for use as an internal compression utility within 
a single Spark
 * application.{quote}
Not sure whether lz4-java LZ4BlockOutputStream output can be read by Linux lz4 
tool.

Your best bet may be writing a Java decompression application with a compatible 
version of lz4-java, e.g., Spark 2.3 uses lz4-java 1.4.0.


was (Author: jzhuge):
OOM usually indicates format mismatch, e.g., reading a large block size, then 
trying to allocate memory.

After looking into Spark code, I realized I was wrong about using Hadoop codec. 
Spark uses its own LZ4 codec base on 
[lz4-java|https://github.com/lz4/lz4-java]. Check out [LZ4CompressionCodec in 
2.3.4|https://github.com/apache/spark/blob/v2.3.4/core/src/main/scala/org/apache/spark/io/CompressionCodec.scala#L113-L124].

Its javadoc points out:
{quote} * @note The wire protocol for this codec is not guaranteed to be 
compatible across versions
 * of Spark. This is intended for use as an internal compression utility within 
a single Spark
 * application.{quote}
Not sure whether lz4-java LZ4BlockOutputStream output can be read by Linux lz4 
tool.

Your best bet may be writing a Java decompression program with a matching or 
compatible version of lz4-java, e.g., Spark 2.3 uses lz4-java 1.4.0.

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}

[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2020-02-07 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17032199#comment-17032199
 ] 

John Zhuge commented on HADOOP-12990:
-

An OOM here usually indicates a format mismatch, e.g., the decoder reads a 
bogus, very large block-size field and then tries to allocate a buffer that big.
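
To see the mismatch concretely, here is a minimal sketch (the class and file 
names are mine, purely illustrative): Hadoop's block codec begins by reading a 
4-byte big-endian length from the stream, but a file written by the lz4 CLI 
begins with the lz4 magic number, so the decoded "length" is garbage:
{code}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical sketch: decode the first 4 bytes of a file the way the Hadoop
// block codec does (as a big-endian int). On a file written by the lz4 CLI
// these bytes are the frame magic, not a length, so the "block length" comes
// out absurdly large and the codec tries to allocate a buffer that big.
public class PeekBlockLength {
  public static void main(String[] args) throws IOException {
    try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
      int len = in.readInt(); // readInt() is big-endian, like Hadoop's raw reads
      System.out.println("Decoded block length: " + len + " bytes");
    }
  }
}
{code}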

After looking into the Spark code, I realized I was wrong about Spark using the 
Hadoop codec. Spark uses its own LZ4 codec based on 
[lz4-java|https://github.com/lz4/lz4-java]. Check out [LZ4CompressionCodec in 
2.3.4|https://github.com/apache/spark/blob/v2.3.4/core/src/main/scala/org/apache/spark/io/CompressionCodec.scala#L113-L124].

Its javadoc points out:
{quote}@note The wire protocol for this codec is not guaranteed to be 
compatible across versions of Spark. This is intended for use as an internal 
compression utility within a single Spark application.{quote}
I am not sure whether lz4-java LZ4BlockOutputStream output can be read by the 
Linux lz4 tool.

Your best bet may be to write a Java decompression program with a matching or 
compatible version of lz4-java, e.g., Spark 2.3 uses lz4-java 1.4.0.
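
For example, a minimal sketch with lz4-java's LZ4BlockInputStream (assuming 
lz4-java 1.4.0 on the classpath; class and file names are illustrative):
{code}
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import net.jpountz.lz4.LZ4BlockInputStream;

// Minimal sketch: decompress a file written by lz4-java's LZ4BlockOutputStream
// (the stream Spark's codec wraps) back to plain bytes.
public class Lz4JavaDecompress {
  public static void main(String[] args) throws IOException {
    try (InputStream in = new LZ4BlockInputStream(new FileInputStream(args[0]));
         OutputStream out = new FileOutputStream(args[1])) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    }
  }
}
{code}
Compile it against lz4-java and run it with, e.g., 
{{java -cp .:lz4-java-1.4.0.jar Lz4JavaDecompress in.lz4 out.txt}} (paths 
illustrative).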

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit an exception when trying to view a compressed file 
> created by the Linux lz4 tool.
> The Hadoop version includes HADOOP-11184 "update lz4 to r123", thus it uses 
> the LZ4 library from release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org