[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975859#comment-14975859
 ] 

Hudson commented on HADOOP-12457:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #541 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/541/])
HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc. (ozawa: rev 
ea6b183a1a649ad2874050ade8856286728c654c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled, failing with "unmappable 
> character for encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}
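
For context, the unmappable characters are the en dashes in the "68–95–99.7 
rule". A minimal sketch of the kind of fix, assuming the source must stay 
ASCII-clean (not necessarily what the committed patch does):

{code}
// Sketch only: keep the source file pure ASCII so javadoc/javac can read it
// with -encoding ASCII.
/**
 * ... by searching for the 68-95-99.7 rule. We flag an RPC as a slow RPC
 * (plain ASCII hyphens; the HTML entity &ndash; would render an en dash in
 * the generated Javadoc without putting a non-ASCII byte in the source).
 */
{code}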



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975901#comment-14975901
 ] 

Hudson commented on HADOOP-12457:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1325 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1325/])
HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc. (ozawa: rev 
ea6b183a1a649ad2874050ade8856286728c654c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java


> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled, failing with "unmappable 
> character for encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975907#comment-14975907
 ] 

Hadoop QA commented on HADOOP-12040:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s 
{color} | {color:red} Patch generated 2 new checkstyle issues in root (total 
was 118, now 117). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 43s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 214m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.security.TestShellBasedIdMapping |
|   | hadoop.ipc.TestIPC |
|   | hadoop.test.TestTimedOutTestsList

[jira] [Commented] (HADOOP-12468) Partial group resolution failure should not result in user lockout

2015-10-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975933#comment-14975933
 ] 

Harsh J commented on HADOOP-12468:
--

The stderr seems worth parsing to spot the bad groups specifically and ignore 
them (vs. ignoring all numeric groups, which may cause a regression in some 
user clusters).

Alternatively, the command {{id -G}} (no {{-n}}) returns numeric IDs. We could 
match its results against {{id -Gn}} output to filter out unknown groups. The 
assumption here is that the ordering will be the same.
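
A rough sketch of that matching ({{runCommand}} here is a hypothetical helper 
returning a command's stdout; as noted, this assumes {{id -G}} and {{id -Gn}} 
emit the user's groups in the same order):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

class GroupMatcher {
  // Hypothetical helper: run a command and return its first line of stdout.
  static String runCommand(String... cmd) throws Exception {
    Process p = new ProcessBuilder(cmd).start();
    try (BufferedReader r =
        new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String line = r.readLine();
      p.waitFor();
      return line == null ? "" : line;
    }
  }

  static List<String> resolveGroups(String user) throws Exception {
    String[] ids   = runCommand("id", "-G", user).trim().split("\\s+");
    String[] names = runCommand("id", "-Gn", user).trim().split("\\s+");
    List<String> groups = new ArrayList<>();
    for (int i = 0; i < Math.min(ids.length, names.length); i++) {
      // Keep only positions where the -Gn output differs from the raw numeric
      // ID at the same position, i.e. the group name actually resolved.
      if (!names[i].equals(ids[i])) {
        groups.add(names[i]);
      }
    }
    return groups;
  }
}
{code}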

> Partial group resolution failure should not result in user lockout
> --
>
> Key: HADOOP-12468
> URL: https://issues.apache.org/jira/browse/HADOOP-12468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.1
> Environment: Linux
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12468.001.patch, HADOOP-12468.002.patch, 
> HADOOP-12468.003.patch
>
>
> If a Hadoop cluster is configured to use ShellBasedUnixGroupsMapping for 
> user/group name mapping, occasionally some group names may become 
> unresolvable (for example, when using SSSD). 
> ShellBasedUnixGroupsMapping uses the shell command "id -Gn" to retrieve the 
> group names of a user; however, the existing logic assumes that if the exit 
> code of the command is non-zero, the user has no group names at all. The 
> shell command in Linux returns a non-zero exit code if any group name is not 
> resolvable. Unfortunately, it is possible that a user belongs to multiple 
> groups, so any partial failure in group name resolution would deny the user 
> access entirely.
> On the other hand, the JNI implementation (JniBasedUnixGroupsMapping) is more 
> resilient: if any group name is unresolvable, it is simply ignored, and 
> whatever groups are resolvable are returned.
> It is arguable that if a group name is not resolvable, the administrator 
> should configure their directory/authentication service correctly and Hadoop 
> is in no position to handle it; but since the existing unit tests assume the 
> outputs of the JNI-based and shell-based implementations are the same, we 
> should improve the shell-based group name resolution and make it as 
> resilient as the JNI-based one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975939#comment-14975939
 ] 

Hadoop QA commented on HADOOP-11887:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} Patch generated 5 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 7, now 12). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 25s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.sink.TestFileSink |
|   | hadoop.metrics2.sink.TestFileSink |
| JDK v1.7.0_79 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/a

[jira] [Updated] (HADOOP-12512) hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger

2015-10-27 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-12512:
---
Attachment: HADOOP-12512.001.patch

Patch

> hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger 
> --
>
> Key: HADOOP-12512
> URL: https://issues.apache.org/jira/browse/HADOOP-12512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Prabhu Joseph
> Attachments: HADOOP-12512.001.patch
>
>
> hadoop fs -ls / fails with the below error when we use a custom 
> -Dhadoop.root.logger class that creates a Configuration object and adds the 
> defaultResource custom-conf.xml with quiet = false.
> custom-conf.xml is an optional configuration file.
> Exception in thread "main" java.lang.RuntimeException: custom-conf.xml not 
> found
> at
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2612)
> at
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
> at
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
> at
> org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
> at
> org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
> at
> org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
> at
> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
> at
> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
>   
>   
> ISSUE:
> ##
> There is a logic issue in the Configuration class and its defaultResources 
> list.
> Configuration is shared by classes: it keeps a single shared list of default 
> resources added by all of them.
> If class A wants resources x, y, z and marks them all as optional using 
> quiet = false, Configuration loads those that are present, skips the rest, 
> and adds all of them to the list.
> Now the shared list, i.e. defaultResources, has x, y, z.
> Now if class B wants resource x and marks it as mandatory, loadResources 
> scans the entire list and treats every entry as mandatory. So during the 
> scan of y, it will fail.
> Here A is the custom class and B is FsShell.
> FsShell checks for custom-conf.xml, treats it as mandatory, and fails.
> 1. The mandatory/optional flag has to be per resource. [OR]
> 2. defaultResources should not be shared. 
> Both of these look complex. A simpler fix is the one below:
> 1. When loadResource initially skips a resource because it is not found, it 
> has to remove the entry from the defaultResources list as well. There is no 
> use in keeping a resource that is not on the classpath in the list.
> CODE CHANGE:  class org.apache.hadoop.conf.Configuration
> 
> private Resource loadResource(Properties properties, Resource wrapper,
>     boolean quiet) {
>   ...
>   if (root == null) {
>     if (doc == null) {
>       if (quiet) {
>         // FIX: during skip, remove the resource from the shared list
>         defaultResources.remove(resource);
>         return null;
>       }
>       throw new RuntimeException(resource + " not found");
>     }
>     root = doc.getDocumentElement();
>   }
>   ...
> }
> 
> Tested after the code fix; it runs successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12512) hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger

2015-10-27 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-12512:
---
Status: Patch Available  (was: Open)

> hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger 
> --
>
> Key: HADOOP-12512
> URL: https://issues.apache.org/jira/browse/HADOOP-12512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Prabhu Joseph
> Attachments: HADOOP-12512.001.patch
>
>
> hadoop fs -ls / fails with the below error when we use a custom 
> -Dhadoop.root.logger class that creates a Configuration object and adds the 
> defaultResource custom-conf.xml with quiet = false.
> custom-conf.xml is an optional configuration file.
> Exception in thread "main" java.lang.RuntimeException: custom-conf.xml not 
> found
> at
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2612)
> at
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
> at
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
> at
> org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
> at
> org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
> at
> org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
> at
> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
> at
> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
>   
>   
> ISSUE:
> ##
> There is a logic issue in the Configuration class and its defaultResources 
> list.
> Configuration is shared by classes: it keeps a single shared list of default 
> resources added by all of them.
> If class A wants resources x, y, z and marks them all as optional using 
> quiet = false, Configuration loads those that are present, skips the rest, 
> and adds all of them to the list.
> Now the shared list, i.e. defaultResources, has x, y, z.
> Now if class B wants resource x and marks it as mandatory, loadResources 
> scans the entire list and treats every entry as mandatory. So during the 
> scan of y, it will fail.
> Here A is the custom class and B is FsShell.
> FsShell checks for custom-conf.xml, treats it as mandatory, and fails.
> 1. The mandatory/optional flag has to be per resource. [OR]
> 2. defaultResources should not be shared. 
> Both of these look complex. A simpler fix is the one below:
> 1. When loadResource initially skips a resource because it is not found, it 
> has to remove the entry from the defaultResources list as well. There is no 
> use in keeping a resource that is not on the classpath in the list.
> CODE CHANGE:  class org.apache.hadoop.conf.Configuration
> 
> private Resource loadResource(Properties properties, Resource wrapper,
>     boolean quiet) {
>   ...
>   if (root == null) {
>     if (doc == null) {
>       if (quiet) {
>         // FIX: during skip, remove the resource from the shared list
>         defaultResources.remove(resource);
>         return null;
>       }
>       throw new RuntimeException(resource + " not found");
>     }
>     root = doc.getDocumentElement();
>   }
>   ...
> }
> 
> Tested after the code fix; it runs successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976012#comment-14976012
 ] 

Hadoop QA commented on HADOOP-12514:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 35s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768839/HADOOP-12514.000.patch
 |
| JIRA Issue | HADOOP-12514 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 688f863fa685 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 96677be |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/ja

[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-27 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976024#comment-14976024
 ] 

Yi Liu commented on HADOOP-12040:
-

Generally looks good, Kai.

1. You need to clean up the checkstyle issues. For example, some lines are 
longer than 80 characters.
2. Some related tests show failures, such as TestRecoverStripedFile.
3. 
{code}
+    for (int i = 0; i < erasedIndexes.length; i++) {
+      if (erasedIndexes[i] >= getNumDataUnits()) {
+        erasedIndexes2[idx++] = erasedIndexes[i] - getNumDataUnits();
+        numErasedParityUnits++;
+      }
+    }
+    for (int i = 0; i < erasedIndexes.length; i++) {
+      if (erasedIndexes[i] < getNumDataUnits()) {
+        erasedIndexes2[idx++] = erasedIndexes[i] + getNumParityUnits();
+        numErasedDataUnits++;
+      }
+    }
{code}
This can be done in a single {{for}} loop.
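
One way the merged loop might look (a sketch only, not necessarily what will 
be committed: it buffers the converted data-unit indexes so the parity-first 
layout of {{erasedIndexes2}} from the two-loop version is preserved):

{code}
int idx = 0;
int[] dataIndexes = new int[erasedIndexes.length];
int dataCount = 0;
for (int erased : erasedIndexes) {
  if (erased >= getNumDataUnits()) {
    // Erased parity unit: write to the front, converted to the coder's order.
    erasedIndexes2[idx++] = erased - getNumDataUnits();
    numErasedParityUnits++;
  } else {
    // Erased data unit: buffer it, to be appended after the parity entries.
    dataIndexes[dataCount++] = erased + getNumParityUnits();
    numErasedDataUnits++;
  }
}
System.arraycopy(dataIndexes, 0, erasedIndexes2, idx, dataCount);
{code}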



> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes, and outputs parameters in the decode call in the raw erasure 
> coder; this was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] felt, we'd better 
> change the order to make it natural for HDFS usage, where data blocks 
> usually come before parity blocks in a group. Doing this would avoid some 
> tricky reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12366) expose calculated paths

2015-10-27 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976046#comment-14976046
 ] 

Varun Vasudev commented on HADOOP-12366:


Thanks for the patch [~aw]!

A couple of things -
1) The patch doesn't apply cleanly to trunk any more. I see issues with 'hdfs'.
2) Some of the variables printed out (such as JAVA_HOME, HADOOP_COMMON_HOME and 
HADOOP_CONF_DIR) have absolute paths while some (such as HADOOP_COMMON_DIR, 
HADOOP_COMMON_LIB_JARS_DIR and HADOOP_COMMON_LIB_NATIVE_DIR) have relative 
paths. Do you think the difference matters, or do we expect users to know what 
the individual variables are referring to?

The rest of the patch looks good.

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12366.00.patch
>
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976048#comment-14976048
 ] 

Steve Loughran commented on HADOOP-12514:
-

+1

> Make static fields in GenericTestUtils for assertExceptionContains() 
> package-private and final
> --
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow-up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead 
> of protected, as they are for test purposes and {{TestGenericTestUtils}} is 
> in the same package as {{GenericTestUtils}}.
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected exception";
> Meanwhile, we may need to make them final.
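
For illustration, a sketch of the resulting declarations (assuming the string 
values stay as they are):

{code}
static final String E_NULL_THROWABLE = "Null Throwable";
static final String E_NULL_THROWABLE_STRING = "Null Throwable.toString() value";
static final String E_UNEXPECTED_EXCEPTION = "but got unexpected exception";
{code}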



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12509) org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing

2015-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976050#comment-14976050
 ] 

Steve Loughran commented on HADOOP-12509:
-

Dan, this was the one failing on Jenkins; I have enough of those without 
worrying about the details of adjacent tests.

Like you say, cleanup is separate.

> org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing
> ---
>
> Key: HADOOP-12509
> URL: https://issues.apache.org/jira/browse/HADOOP-12509
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12509-001.patch, Hadoop-common-trunk-Java8 #594 
> test - testKeyACLs [Jenkins].pdf
>
>
> Failure of Jenkins in trunk, test 
> {{org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12515) hadoop-kafka module doesn't resolve mockito related classes after imported into Intellij IDEA

2015-10-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12515:

Status: Patch Available  (was: Open)

> hadoop-kafka module doesn't resolve mockito related classes after imported 
> into Intellij IDEA
> -
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-27 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976060#comment-14976060
 ] 

Walter Su commented on HADOOP-12040:


Thanks [~drankye]! The v3 patch LGTM. (Non-binding)

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes, and outputs parameters in the decode call in the raw erasure 
> coder; this was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] felt, we'd better 
> change the order to make it natural for HDFS usage, where data blocks 
> usually come before parity blocks in a group. Doing this would avoid some 
> tricky reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-27 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976073#comment-14976073
 ] 

Walter Su commented on HADOOP-12040:


Thanks [~hitliuyi] for pointing out the style issues.
BTW, I think the failure of {{TestRecoverStripedFile}} is not related to this. 
With the patch it passes locally. The occasional failure could have the same 
cause as HDFS-9275.

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes, and outputs parameters in the decode call in the raw erasure 
> coder; this was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] felt, we'd better 
> change the order to make it natural for HDFS usage, where data blocks 
> usually come before parity blocks in a group. Doing this would avoid some 
> tricky reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-10-27 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976118#comment-14976118
 ] 

Varun Vasudev commented on HADOOP-10787:


Thanks for the patches, [~aw]. The latest patch doesn't apply to trunk. Can 
you please take a look?

Also, mapred still has a reference to TOOL_PATH:
{code}
  archive-logs)
CLASS=org.apache.hadoop.tools.HadoopArchiveLogs
hadoop_debug "Injecting TOOL_PATH into CLASSPATH"
hadoop_add_classpath "${TOOL_PATH}"
hadoop_debug "Appending HADOOP_CLIENT_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CLIENT_OPTS}"
{code}

> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) hadoop-kafka module doesn't resolve mockito related classes after imported into Intellij IDEA

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976199#comment-14976199
 ] 

Hadoop QA commented on HADOOP-12515:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-kafka in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-kafka in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 7m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768869/HADOOP-12515-v1.patch 
|
| JIRA Issue | HADOOP-12515 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  |
| uname | Linux 52f9a22cff26 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 96677be |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7950/testReport/ |
| Max memory used | 228MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7950/console |


This message was automatically generated.



> hadoop-kafka module doesn't resolve mockito related classes after imported 
> into Intellij IDEA
> -
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> 

[jira] [Commented] (HADOOP-12512) hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976227#comment-14976227
 ] 

Hadoop QA commented on HADOOP-12512:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 176, now 177). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 23s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.ipc.TestIPC |
|   | hadoop.metrics2.sink.TestFileSink |
|   | hadoop.metrics2.sink.TestFileSink |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768930/HADOOP-12512.001.patch
 |
| JIRA Issue | HADOOP-12512 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 4a7655589616 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-sla

[jira] [Commented] (HADOOP-12515) hadoop-kafka module doesn't resolve mockito related classes after imported into Intellij IDEA

2015-10-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976228#comment-14976228
 ] 

Akira AJISAKA commented on HADOOP-12515:


LGTM, +1.

> hadoop-kafka module doesn't resolve mockito related classes after imported 
> into Intellij IDEA
> -
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12515:
---
Target Version/s: 3.0.0
Priority: Major  (was: Minor)
Hadoop Flags: Reviewed
 Component/s: test
  Issue Type: Bug  (was: Test)
 Summary: Mockito dependency is missing in hadoop-kafka module  
(was: hadoop-kafka module doesn't resolve mockito related classes after 
imported into Intellij IDEA)

> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976239#comment-14976239
 ] 

Akira AJISAKA commented on HADOOP-12515:


I've committed this to trunk. Thanks [~drankye] for the contribution.

> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12515:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976249#comment-14976249
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8713 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8713/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-kafka/pom.xml


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Attachment: HADOOP-11887-v8.patch

Updated the patch to address the checkstyle issues that were found.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch, 
> HADOOP-11887-v8.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).
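
As a rough illustration of that dynamic-loading idea (the library and class 
names below are assumptions for the sketch, not the patch's actual loader):

{code:java}
// Sketch only: probe for a native ISA-L bridge library at class-load time
// and fall back to the pure-Java coder when it is absent.
public final class IsaLNativeProbe {

  private static final boolean AVAILABLE;

  static {
    boolean ok;
    try {
      // Resolves libhadoopisal.so on *nix and hadoopisal.dll on Windows
      // from java.library.path; the library name is an assumption.
      System.loadLibrary("hadoopisal");
      ok = true;
    } catch (UnsatisfiedLinkError e) {
      ok = false; // no native module available; use the Java implementation
    }
    AVAILABLE = ok;
  }

  public static boolean isNativeCoderAvailable() {
    return AVAILABLE;
  }

  private IsaLNativeProbe() {
  }
}
{code}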



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976271#comment-14976271
 ] 

Kai Zheng commented on HADOOP-11887:


Colin, would you help review this one more time? Thanks!

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch, 
> HADOOP-11887-v8.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976273#comment-14976273
 ] 

Kai Zheng commented on HADOOP-12515:


Thanks Akira for the review and commit!

> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976299#comment-14976299
 ] 

Kai Zheng commented on HADOOP-12040:


Thanks Yi and Walter for the review and comments! I will double check the 
mentioned issues.

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we used the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in raw erasure coder, 
> which inherited from HDFS-RAID due to impact enforced by {{GaliosField}}. As 
> [~zhz] pointed and [~hitliuyi] felt, we'd better change the order to make it 
> natural for HDFS usage, where usually data blocks are before parity blocks in 
> a group. Doing this would avoid some reordering tricky logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976340#comment-14976340
 ] 

Kai Zheng commented on HADOOP-11887:


Sorry, it looks like I missed some changes for Windows when I rebased. Will 
fix this and test on Windows tomorrow.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch, 
> HADOOP-11887-v8.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12512) hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger

2015-10-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976388#comment-14976388
 ] 

Daniel Templeton commented on HADOOP-12512:
---

Let me make sure I understand the issue.  You have a custom root logger that 
uses an optional config file.  When Log4j instantiates your logger, you add the 
optional config file to the list of default resources on the Configuration.  
Then, when the FsShell starts, it tries to load your config file, fails, and 
throws an exception.  Is that correct?

Why are you adding your file as a default resource and not a regular resource 
(addResource())? That's not the source of the issue, but I'm curious.

Looking at the code, it looks to me like the first thing FsShell does is load 
all resources with quiet=false, meaning at that time there shouldn't be 
anything optional in the resource list.  All subsequent resource loading is 
done with quiet=true, allowing optional resources.  It seems to me the issue is 
that you're adding the resource too early.  Is it possible to do the 
initialization of your class lazily so that it doesn't go hunting for resource 
files until it's past the FsShell startup?
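
For illustration, a minimal sketch of that lazy approach (the class name is 
hypothetical; it assumes the custom logger owns its own Configuration 
instance):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: build the Configuration on first use instead of during logger
// construction, and register custom-conf.xml on this instance only
// (addResource()) rather than on the JVM-wide defaultResources list.
public class CustomRootLogger {

  private volatile Configuration conf;

  private Configuration getConf() {
    if (conf == null) {
      synchronized (this) {
        if (conf == null) {
          Configuration c = new Configuration();
          c.addResource("custom-conf.xml"); // instance-local, not shared
          conf = c;
        }
      }
    }
    return conf;
  }
}
{code}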

> hadoop fs -ls / fails when we use Custom -Dhadoop.root.logger 
> --
>
> Key: HADOOP-12512
> URL: https://issues.apache.org/jira/browse/HADOOP-12512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Prabhu Joseph
> Attachments: HADOOP-12512.001.patch
>
>
> hadoop fs -ls / fails with the below error when we use a custom 
> -Dhadoop.root.logger that creates a Configuration object and adds 
> custom-conf.xml as a default resource with quiet = true, since 
> custom-conf.xml is an optional configuration file.
> Exception in thread "main" java.lang.RuntimeException: custom-conf.xml not 
> found
> at
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2612)
> at
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
> at
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
> at
> org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
> at
> org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
> at
> org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
> at
> org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:170)
> at
> org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:153)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
>   
>   
> ISSUE:
> ##
> There is a logic issue in the Configuration class and its defaultResources 
> list. Configuration is shared across classes, and it keeps a single shared 
> list of default resources that those classes add.
> If class A wants resources x, y, z and marks them all optional (quiet = 
> true), Configuration loads whichever are present, skips the rest, and adds 
> all of them to the list.
> The shared list, i.e. defaultResources, now has x, y, z.
> If class B then wants resource x and marks it mandatory, loadResources 
> scans the entire list and treats every entry as mandatory, so during the 
> scan of y it will fail.
> Here A is the custom class and B is FsShell: FsShell encounters 
> custom-conf.xml, treats it as mandatory, and fails.
> Possible fixes:
> 1. Make mandatory/optional a per-resource property. [OR]
> 2. Stop sharing defaultResources.
> Both of them look complex. A simpler fix is the one below:
> when loadResource initially skips a resource because it is not found, it 
> should also remove the entry from the defaultResources list; there is no 
> use keeping a resource that is not on the classpath in that list.
> CODE CHANGE: class org.apache.hadoop.conf.Configuration
> 
> {code}
> private Resource loadResource(Properties properties, Resource wrapper,
>     boolean quiet) {
>   ...
>   if (root == null) {
>     if (doc == null) {
>       if (quiet) {
>         defaultResources.remove(resource);  // FIX: during skip, remove the
>                                             // resource from the shared list
>         return null;
>       }
>       throw new RuntimeException(resource + " not found");
>     }
>     root = doc.getDocumentElement();
>   }
>   ...
> {code}
> Tested after the code fix; hadoop fs -ls / runs successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976394#comment-14976394
 ] 

Hadoop QA commented on HADOOP-11887:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 7, now 7). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 5s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 19s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.security.TestShellBasedIdMapping |
|   | hadoop.metrics2.sink.TestFileSink |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768960/HADOOP-11887-v8.patch 
|
| JIRA Issue | HADOOP-11887 |
| Optional Tests |  asflicense  javac  javadoc  mvnin

[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976400#comment-14976400
 ] 

Hudson commented on HADOOP-12515:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #602 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/602/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-tools/hadoop-kafka/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976456#comment-14976456
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1326 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1326/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-tools/hadoop-kafka/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976507#comment-14976507
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2533 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2533/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-tools/hadoop-kafka/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976517#comment-14976517
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #590 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/590/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-kafka/pom.xml


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976599#comment-14976599
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #542 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/542/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-kafka/pom.xml


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-10-27 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976619#comment-14976619
 ] 

Tony Wu commented on HADOOP-12482:
--

Manually ran the failed tests on Linux using JDK 1.7; all tests pass without 
error.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # metrics sources is updated with new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such case getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior is that updateJmxCache() guarantees one call to 
> updateInfoCache() after jmxCacheTTL has elapsed whenever lastRecs was set to 
> null by updateJmxCache() itself.
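
One way to realize that guarantee is to track explicitly whether 
updateJmxCache() itself cleared lastRecs. The sketch below assumes a new 
boolean field (here called lastRecsCleared, with the timer-driven 
getMetrics() path resetting it to false when it repopulates lastRecs); it is 
an illustration of the idea, not the committed patch:

{code:java}
private boolean lastRecsCleared = true; // set when updateJmxCache() nulls lastRecs

private void updateJmxCache() {
  boolean getAllMetrics = false;
  synchronized (this) {
    if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
      // temporarily advance the expiry while updating the cache
      jmxCacheTS = Time.now() + jmxCacheTTL;
      // Do not rely on lastRecs == null alone: a concurrent getMetrics() may
      // have repopulated it. Our own flag records that a refresh is owed.
      if (lastRecs == null || lastRecsCleared) {
        getAllMetrics = true;
      }
    } else {
      return;
    }
    if (getAllMetrics) {
      MetricsCollectorImpl builder = new MetricsCollectorImpl();
      getMetrics(builder, true);
    }
    updateAttrCache();
    if (getAllMetrics) {
      updateInfoCache();
    }
    jmxCacheTS = Time.now();
    lastRecs = null;        // in case regular interval update is not running
    lastRecsCleared = true; // getMetrics() would set this back to false
  }
}
{code}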



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-10-27 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Attachment: (was: HADOOP-12321-005-aggregated.patch)

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to ordering of operations and, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start and stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> and {{serviceStop()}} methods will fix the concurrency and state-model 
> issues, and make it trivial to add as a child: any YARN service which 
> subclasses {{CompositeService}} (most of the NM and RM apps) will be able to 
> hook up the monitor simply by creating one in the ctor and adding it as a 
> child.
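
A rough sketch of that shape, assuming the plain {{AbstractService}} API from 
hadoop-common (the pause-detection body is elided; this outlines the 
proposal, not the final patch):

{code:java}
import org.apache.hadoop.service.AbstractService;

// Sketch: the monitor as a service, so start/stop ordering and re-entrancy
// are handled by the AbstractService state machine rather than by hand.
public class JvmPauseMonitorService extends AbstractService {

  private Thread monitorThread;

  public JvmPauseMonitorService() {
    super("JvmPauseMonitor");
  }

  @Override
  protected void serviceStart() throws Exception {
    monitorThread = new Thread(new Runnable() {
      @Override
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          try {
            Thread.sleep(500); // GC-pause detection loop elided in this sketch
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      }
    }, "JvmPauseMonitor");
    monitorThread.setDaemon(true);
    monitorThread.start();
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    if (monitorThread != null) {
      monitorThread.interrupt();
      monitorThread.join();
    }
    super.serviceStop();
  }
}
{code}

A {{CompositeService}} parent would then hook it up simply with 
{{addService(new JvmPauseMonitorService())}} in its constructor.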



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-10-27 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Attachment: HADOOP-12321-005-aggregated.patch

Reattaching the patch to get a Jenkins report again.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to ordering of operations and, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start and stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> and {{serviceStop()}} methods will fix the concurrency and state-model 
> issues, and make it trivial to add as a child: any YARN service which 
> subclasses {{CompositeService}} (most of the NM and RM apps) will be able to 
> hook up the monitor simply by creating one in the ctor and adding it as a 
> child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12487) DomainSocket.close() assumes incorrect Linux behaviour

2015-10-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976634#comment-14976634
 ] 

Alan Burlison commented on HADOOP-12487:


If Al Viro thought the suggestions were without merit then I'd have expected 
him to simply terminate the discussion, which he hasn't. The current Linux 
shutdown() and close() behaviour is not POSIX compliant. Whether or not the 
Linux developers think it's worth complying with POSIX in this area is an 
interesting discussion, but not really pertinent here: even if the behaviour 
does change, Hadoop has to deal with the current Linux behaviour.

I understand your *theoretical* race scenario; however, after looking at the 
Hadoop code carefully I can't see how it will ever occur *in practice*. 
DomainSocket uses a CloseableReferenceCount to make sure that once the FD 
encapsulated by the DomainSocket is closed it isn't used any more. It 
therefore doesn't matter if the FD is recycled elsewhere, because the copy of 
it inside the DomainSocket is 'dead' and therefore irrelevant.

All the read/write uses of the FD are made from inside DomainSocket as far as I 
can tell, and are therefore protected by the CloseableReferenceCount. However 
the fd field itself is not private. As far as I can tell the only place the FD 
is used externally to DomainSocket is from within DomainSocketWatcher and 
again, as far as I can tell it's only used for poll(), DomainSocketWatcher 
doesn't read or write to it, doesn't open any FDs itself and re-validates the 
DomainSocket FDs it uses by calling sock.refCount.unreferenceCheckClosed() etc, 
so I think that is safe as well.

Unless I've missed something, I believe the close routine in DomainSocket 
could be changed to call shutdown() immediately followed by close(), then wait 
for the CloseableReferenceCount to reach zero and return. If we ignore any 
failures from shutdown(), I think that would then work on both Linux and 
Solaris.

If I'm missing a pitfall somewhere here I'd be grateful if you could point it 
out, thanks.
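
For concreteness, a sketch of that sequence (assuming refCount is the 
existing CloseableReferenceCount; its method names and the native helpers 
here are approximations, not the actual DomainSocket code):

{code:java}
public void close() throws IOException {
  // Flip to closed exactly once; later callers see the socket as closed.
  if (!refCount.setClosed()) {
    return;
  }
  try {
    shutdown0(fd); // wakes any blocked accept/read/write on Linux
  } catch (IOException e) {
    // Ignored: on Solaris, shutdown() on an unconnected socket fails with
    // ENOTCONN, which is the POSIX-compliant behaviour discussed above.
  }
  close0(fd);
  // Wait until every in-flight operation has dropped its reference, so the
  // (now recyclable) fd number can never be touched through this object.
  while (refCount.getReferenceCount() > 0) {
    try {
      Thread.sleep(1);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}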

> DomainSocket.close() assumes incorrect Linux behaviour
> --
>
> Key: HADOOP-12487
> URL: https://issues.apache.org/jira/browse/HADOOP-12487
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Linux Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: shutdown.c
>
>
> I'm getting a test failure in TestDomainSocket.java, in the 
> testSocketAcceptAndClose test. That test creates a socket which one thread 
> waits on in DomainSocket.accept() whilst a second thread sleeps for a short 
> time before closing the same socket with DomainSocket.close().
> DomainSocket.close() first calls shutdown0() on the socket before closing it 
> with close0() - both are thin wrappers around the corresponding libc socket 
> calls. DomainSocket.close() contains the following comment, explaining the 
> logic involved:
> {code}
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
> {code}
> Unfortunately that relies on non-standards-compliant Linux behaviour. I've 
> written a simple C test case that replicates the scenario above:
> # ThreadA opens, binds, listens and accepts on a socket, waiting for 
> connections.
> # Some time later ThreadB calls shutdown on the socket ThreadA is waiting in 
> accept on.
> Here is what happens:
> On Linux, the shutdown call in ThreadB succeeds and the accept call in 
> ThreadA returns with EINVAL.
> On Solaris, the shutdown call in ThreadB fails and returns ENOTCONN. ThreadA 
> continues to wait in accept.
> Relevant POSIX manpages:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/accept.html
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/shutdown.html
> The POSIX shutdown manpage says:
> "The shutdown() function shall cause all or part of a full-duplex connection 
> on the socket associated with the file descriptor socket to be shut down."
> ...
> "\[ENOTCONN] The socket is not connected."
> Page 229 & 303 of "UNIX System V Network Programming" say:
> "shutdown can only be called on sockets that have been previously connected"
> "The socket \[passed to accept that] fd refers to does not participate in the 
> connection. It remains available to receive further connect indications"
> That is pretty clear, sockets being waited on with accept are not connected 
> by definition. Nor is it the accept socket connected when a client connects 
> to it, it is the socket returned by accept that is connected to the client. 
> Therefore the Solaris behaviour of failing the shutdown call is correct.
> In order to get the

[jira] [Updated] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12178:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, 
> if so, rethrow the exception.
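
A self-contained sketch of that guard (class, field and helper names are 
hypothetical; the real change lives in org.apache.hadoop.ipc.Client):

{code:java}
public class SaslSetupGuard {

  private Object saslRpcClient; // stands in for the real SaslRpcClient field

  public void setupSaslConnection() throws Exception {
    try {
      saslRpcClient = createSaslRpcClient(); // may throw before assignment
    } catch (Exception ex) {
      if (saslRpcClient == null) {
        // Construction failed: there is no SASL state to inspect, so
        // rethrow the original exception instead of triggering an NPE.
        throw ex;
      }
      handleSaslFailure(ex); // safe: saslRpcClient is known to be non-null
    }
  }

  private Object createSaslRpcClient() throws Exception {
    // Stand-in for the SaslRpcClient constructor, e.g. a SASL resolver
    // class that fails to load.
    throw new IllegalStateException("SASL resolver class failed to load");
  }

  private void handleSaslFailure(Exception ex) {
    // The existing handler logic that dereferences saslRpcClient goes here.
  }
}
{code}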



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976762#comment-14976762
 ] 

zhihai xu commented on HADOOP-12178:


Committed this to trunk and branch-2. Thanks [~steve_l] for the contribution 
and [~hitliuyi] for the review!

> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, 
> if so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12178:
---
Hadoop Flags: Reviewed

> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, 
> if so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12515) Mockito dependency is missing in hadoop-kafka module

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976784#comment-14976784
 ] 

Hudson commented on HADOOP-12515:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2480 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2480/])
HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. (aajisaka: 
rev bcb2386e39433a81f3bf4470b0a425292f47aa73)
* hadoop-tools/hadoop-kafka/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Mockito dependency is missing in hadoop-kafka module
> 
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve Mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.06.patch

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Commented] (HADOOP-12366) expose calculated paths

2015-10-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976843#comment-14976843
 ] 

Allen Wittenauer commented on HADOOP-12366:
---

Thanks.  I'll rebase here in a sec.

FWIW, I never did the research to figure out why, back in 0.21 during the 
project split, the _DIR vars were made relative to their _HOMEs.  Worse yet, 
it was never really documented very well that these vars existed or what they 
did, yet I know vendors are (ab)using them.  As part of HADOOP-9902, I 
actually did add some information on what these vars are and do (in addition 
to a bunch more made during the project split) in hadoop-layout.sh.example.

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12366.00.patch
>
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976872#comment-14976872
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8715 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8715/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.
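
For illustration, a minimal sketch of the guard described above (not the 
committed Client.java change; the class, field, and method names here are 
stand-ins):

{code}
import java.io.IOException;

class IpcClientSketch {
  private Object saslRpcClient; // stand-in for the real SaslRpcClient field

  // If SaslRpcClient construction itself failed, the field is still null,
  // so rethrow the original exception instead of dereferencing the field.
  void handleSaslConnectionFailure(IOException ex) throws IOException {
    if (saslRpcClient == null) {
      throw ex; // SASL setup never started; keep the real stack trace
    }
    // otherwise: SASL-specific cleanup/retry, as in the real handler
  }
}
{code}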



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10787:
--
Attachment: HADOOP-10787.03.patch

-03:
* rebase
* fix some new tool path bits that were added since last patch creation

> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch, HADOOP-10787.03.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-10-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976920#comment-14976920
 ] 

Allen Wittenauer commented on HADOOP-10787:
---

Thanks for the review.  This should fix the issues you discovered. :)

> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch, HADOOP-10787.03.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12366) expose calculated paths

2015-10-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976938#comment-14976938
 ] 

Allen Wittenauer commented on HADOOP-12366:
---

Actually, I'm going to wait to rebase until  HADOOP-10787 is committed, since 
this patch is blocked by that patch.

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12366.00.patch
>
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12494:
--
Hadoop Flags:   (was: Incompatible change)

+1 committing to trunk

Also removing the incompatible flag, since the incompatibility was introduced 
in trunk.

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of token is 
> a token kind instead of a token service.
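
For illustration, a hedged sketch of the keying difference being described (not 
the attached patch; {{Credentials.addToken}}, {{Token.getKind}}, and 
{{Token.getService}} are the real Hadoop APIs, the class is a stand-in):

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

class FetchdtKeySketch {
  static void store(Credentials creds, Token<?> token) {
    // buggy: keyed by kind, e.g. "HDFS_DELEGATION_TOKEN"; two tokens of the
    // same kind from different services overwrite each other
    creds.addToken(new Text(token.getKind()), token);
    // correct: keyed by service, which is unique per endpoint
    creds.addToken(token.getService(), token);
  }
}
{code}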



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12494:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of token is 
> a token kind instead of a token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12487) DomainSocket.close() assumes incorrect Linux behaviour

2015-10-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976974#comment-14976974
 ] 

Colin Patrick McCabe commented on HADOOP-12487:
---

OK.  I understand the question now.  What you are missing is that the reference 
count will often never reach 0 unless the socket can be {{shutdown}}.  For 
example, we hold the refcount while inside the {{accept}} function.  If we 
can't break out of {{accept}} via {{shutdown}}, we'll never get a chance to 
call {{close}} since the refcount will never get to 0.  I think you might have 
to go for a {{select}} / {{poll}} based solution on Solaris unless there is 
actually some way to break out of that {{accept}}.
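
A rough sketch of the refcount pattern being described; the names are 
illustrative, not the real {{DomainSocket}} internals:

{code}
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedSocketSketch {
  private final AtomicInteger refCount = new AtomicInteger(1);

  void accept() throws Exception {
    refCount.incrementAndGet(); // reference held for the whole accept()
    try {
      blockingAccept();         // only returns early if shutdown0() fires
    } finally {
      unref();
    }
  }

  void close() throws Exception {
    shutdown0(); // on Linux this breaks the thread out of accept()
    unref();     // drop the initial reference taken at construction
  }

  private void unref() throws Exception {
    if (refCount.decrementAndGet() == 0) {
      close0(); // safe: no thread is blocked in a syscall any more
    }
  }

  private void blockingAccept() { /* native accept(2) */ }
  private void shutdown0()      { /* native shutdown(2) */ }
  private void close0()         { /* native close(2) */ }
}
{code}

Without the shutdown0() wakeup, the thread blocked in accept() never reaches 
unref(), so close0() never runs; that is exactly the gap a poll/select based 
design would have to fill on Solaris.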

> DomainSocket.close() assumes incorrect Linux behaviour
> --
>
> Key: HADOOP-12487
> URL: https://issues.apache.org/jira/browse/HADOOP-12487
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Linux Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: shutdown.c
>
>
> I'm getting a test failure in TestDomainSocket.java, in the 
> testSocketAcceptAndClose test. That test creates a socket which one thread 
> waits on in DomainSocket.accept() whilst a second thread sleeps for a short 
> time before closing the same socket with DomainSocket.close().
> DomainSocket.close() first calls shutdown0() on the socket before closing 
> close0() - both those are thin wrappers around the corresponding libc socket 
> calls. DomainSocket.close() contains the following comment, explaining the 
> logic involved:
> {code}
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
> {code}
> Unfortunately that relies on non-standards-compliant Linux behaviour. I've 
> written a simple C test case that replicates the scenario above:
> # ThreadA opens, binds, listens and accepts on a socket, waiting for 
> connections.
> # Some time later ThreadB calls shutdown on the socket ThreadA is waiting in 
> accept on.
> Here is what happens:
> On Linux, the shutdown call in ThreadB succeeds and the accept call in 
> ThreadA returns with EINVAL.
> On Solaris, the shutdown call in ThreadB fails and returns ENOTCONN. ThreadA 
> continues to wait in accept.
> Relevant POSIX manpages:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/accept.html
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/shutdown.html
> The POSIX shutdown manpage says:
> "The shutdown() function shall cause all or part of a full-duplex connection 
> on the socket associated with the file descriptor socket to be shut down."
> ...
> "\[ENOTCONN] The socket is not connected."
> Page 229 & 303 of "UNIX System V Network Programming" say:
> "shutdown can only be called on sockets that have been previously connected"
> "The socket \[passed to accept that] fd refers to does not participate in the 
> connection. It remains available to receive further connect indications"
> That is pretty clear, sockets being waited on with accept are not connected 
> by definition. Nor is it the accept socket connected when a client connects 
> to it, it is the socket returned by accept that is connected to the client. 
> Therefore the Solaris behaviour of failing the shutdown call is correct.
> In order to get the required behaviour of ThreadB causing ThreadA to exit the 
> accept call with an error, the correct way is for ThreadB to call close on 
> the socket that ThreadA is waiting on in accept.
> On Solaris, calling close in ThreadB succeeds, and the accept call in ThreadA 
> fails and returns EBADF.
> On Linux, calling close in ThreadB succeeds but ThreadA continues to wait in 
> accept until there is an incoming connection. That accept returns 
> successfully. However subsequent accept calls on the same socket return EBADF.
> The Linux behaviour is fundamentally broken in three places:
> # Allowing shutdown to succeed on an unconnected socket is incorrect.  
> # Returning a successful accept on a closed file descriptor is incorrect, 
> especially as future accept calls on the same socket fail.
> # Once shutdown has been called on the socket, calling close on the socket 
> fails with EBADF. That is incorrect, shutdown should just prevent further IO 
> on the socket, it should not close it.
> The real issue, though, is that there's no single way of doing this that 
> works on both Solaris and Linux; there will need to be platform-specific code 
> in Hadoop to cater for the Linux brokenness. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977013#comment-14977013
 ] 

Hudson commented on HADOOP-12178:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1327 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1327/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977027#comment-14977027
 ] 

Owen O'Malley commented on HADOOP-12494:


+1

> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of token is 
> a token kind instead of a token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977038#comment-14977038
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #591 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/591/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977045#comment-14977045
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #604 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/604/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-12517:
--

 Summary: Findbugs reported 0 issues, but summary 
 Key: HADOOP-12517
 URL: https://issues.apache.org/jira/browse/HADOOP-12517
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Yongjun Zhang


https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559

stated -1 for findbugs; however, 

https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html

says 0.

Thanks a lot for looking into it.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977091#comment-14977091
 ] 

Hudson commented on HADOOP-12494:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #8716 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8716/])
HADOOP-12494. fetchdt stores the token based on token kind instead of (aw: rev 
1396867b52533ecf894158a464c6cd3abc7041b9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java


> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of token is 
> a token kind instead of a token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977093#comment-14977093
 ] 

Yongjun Zhang commented on HADOOP-12517:


Hi [~aw], FYI. Thanks.


> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs; however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-12517:
---
Description: 
https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559

stated -1 for findbugs (The patch appears to introduce 1 new Findbugs (version 
3.0.0) warnings.); however, 

https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html

says 0.

Thanks a lot for looking into it.


  was:
https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559

stated -1 for findbugs; however, 

https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html

says 0.

Thanks a lot for looking into it.



> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.); however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12518) hadoop-pipes doesn't use maven properties for openssl

2015-10-27 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12518:
-

 Summary: hadoop-pipes doesn't use maven properties for openssl
 Key: HADOOP-12518
 URL: https://issues.apache.org/jira/browse/HADOOP-12518
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/pipes
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12518.00.patch

hadoop-common has some maven properties that are used to define where OpenSSL 
lives.  hadoop-pipes should also use them so we can enable automated testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12518) hadoop-pipes doesn't use maven properties for openssl

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12518:
--
Attachment: HADOOP-12518.00.patch

> hadoop-pipes doesn't use maven properties for openssl
> -
>
> Key: HADOOP-12518
> URL: https://issues.apache.org/jira/browse/HADOOP-12518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/pipes
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12518.00.patch
>
>
> hadoop-common has some maven properties that are used to define where OpenSSL 
> lives.  hadoop-pipes should also use them so we can enable automated testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977099#comment-14977099
 ] 

Jason Lowe commented on HADOOP-12517:
-

Looks like a duplicate of HADOOP-12312.

> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.); however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire the lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   
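
For context, a rough sketch of the lease-aware call pattern the patch moves 
toward. The SDK method signatures here are assumptions based on the 
com.microsoft.windowsazure.storage client visible in the trace above, not code 
from the attached patch:

{code}
import com.microsoft.windowsazure.storage.AccessCondition;
import com.microsoft.windowsazure.storage.blob.CloudBlockBlob;

class LeaseSketch {
  static void storeWithLease(CloudBlockBlob blob) throws Exception {
    // acquire a 60-second lease instead of passing a null lease ID
    String leaseId = blob.acquireLease(60, null);
    AccessCondition lease = AccessCondition.generateLeaseCondition(leaseId);
    try {
      blob.uploadProperties(lease, null, null); // operate under the lease
    } finally {
      blob.releaseLease(lease);
    }
  }
}
{code}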

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire the lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   

[jira] [Assigned] (HADOOP-12518) hadoop-pipes doesn't use maven properties for openssl

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12518:
-

Assignee: Allen Wittenauer

> hadoop-pipes doesn't use maven properties for openssl
> -
>
> Key: HADOOP-12518
> URL: https://issues.apache.org/jira/browse/HADOOP-12518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/pipes
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12518.00.patch
>
>
> hadoop-common has some maven properties that are used to define where OpenSSL 
> lives.  hadoop-pipes should also use them so we can enable automated testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12518) hadoop-pipes doesn't use maven properties for openssl

2015-10-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12518:

Status: Patch Available  (was: Open)

> hadoop-pipes doesn't use maven properties for openssl
> -
>
> Key: HADOOP-12518
> URL: https://issues.apache.org/jira/browse/HADOOP-12518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/pipes
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12518.00.patch
>
>
> hadoop-common has some maven properties that are used to define where OpenSSL 
> lives.  hadoop-pipes should also use them so we can enable automated testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977128#comment-14977128
 ] 

Xiaoyu Yao commented on HADOOP-12517:
-

I've seen this recently on the HDFS-8831 Jenkins report as well, both after 
HADOOP-12312 was fixed.

> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.); however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12518) hadoop-pipes doesn't use maven properties for openssl

2015-10-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12518:
--
Attachment: HADOOP-12518.01.patch

-01:
* add openssl to the link of the examples. woops.

> hadoop-pipes doesn't use maven properties for openssl
> -
>
> Key: HADOOP-12518
> URL: https://issues.apache.org/jira/browse/HADOOP-12518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/pipes
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12518.00.patch, HADOOP-12518.01.patch
>
>
> hadoop-common has some maven properties that are used to define where OpenSSL 
> lives.  hadoop-pipes should also use them so we can enable automated testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-10-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977166#comment-14977166
 ] 

Allen Wittenauer commented on HADOOP-12114:
---

Attempting to test this on OS X resulted in a few discoveries:

* The hadoop-pipes pom and cmake setup does not support the properties that the 
rest of hadoop supports. (HADOOP-12518)
* test-patch has never tested hadoop-pipes and likely large chunks of other 
hadoop-tools parts.  I tracked it down to HADOOP-8308 for the origin of that 
code. (YETUS-138)


> Make hadoop-tools/hadoop-pipes Native code -Wall-clean
> --
>
> Key: HADOOP-12114
> URL: https://issues.apache.org/jira/browse/HADOOP-12114
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.0
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: HADOOP-12114.001.patch, HADOOP-12114.002.patch
>
>
> As we specify -Wall as a default compilation flag, it would be helpful if the 
> Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12385:
---
Hadoop Flags: Reviewed

+1 for the patch.  Thanks for addressing the feedback, Steve.

I think Checkstyle is freaking out because of the indentation style on case 
labels in {{SaslRpcClient}}.  It currently uses this:

{code}
switch (method) {
  case TOKEN:
// Code goes here.
{code}

Checkstyle wants us to do this instead:

{code}
switch (method) {
case TOKEN:
  // Code goes here.
{code}

Your patch isn't responsible for introducing this, and I don't consider it in 
scope of this patch to reformat the whole file.

{{TestIPC}} passes locally for me.  This has been a racy test.

> include nested stack trace in SaslRpcClient.getServerToken()
> 
>
> Key: HADOOP-12385
> URL: https://issues.apache.org/jira/browse/HADOOP-12385
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12385-001.patch, HADOOP-12385-002.patch, 
> HADOOP-12385-003.patch
>
>
> The {{SaslRpcClient.getServerToken()}} method loses the stack traces when an 
> attempt to instantiate a {{TokenSelector}} fails. It should include them in 
> the generated exception.
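
For illustration, a minimal sketch of the fix being described (not the attached 
patch): keep the original exception as the cause so its stack trace survives.

{code}
import java.io.IOException;

class NestedTraceSketch {
  static IOException wrap(Exception e) {
    // before: new IOException("Can't instantiate TokenSelector"), trace lost
    // after: chain the cause so the nested stack trace is preserved
    return new IOException("Failed to instantiate TokenSelector", e);
  }
}
{code}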



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Attachment: HADOOP-11887-v9.patch

Added the missing changes for Windows and also fixed a checkstyle issue.
Ready for review. Thanks.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch, 
> HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12519) hadoop-azure tests create a metrics configuration file in the module root directory.

2015-10-27 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12519:
--

 Summary: hadoop-azure tests create a metrics configuration file in 
the module root directory.
 Key: HADOOP-12519
 URL: https://issues.apache.org/jira/browse/HADOOP-12519
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


The hadoop-azure JUnit tests create a metrics configuration file.  This file 
gets saved in the root directory of the hadoop-azure module.  This dirties the 
git workspace and won't get removed by {{mvn clean}}, because it's outside of 
the build target directory.  It also can cause the pre-commit license check 
step to fail, because this ends up looking like the patch added a new file 
without the Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12519) hadoop-azure tests create a metrics configuration file in the module root directory.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12519:
---
Attachment: HADOOP-12519.001.patch

The attached patch moves the metrics configuration file into 
target/test-classes, so it's still available on the runtime classpath for 
tests, but it doesn't dirty the workspace.  This is the same approach used by 
hadoop-common tests, like {{TestMetricsSystemImpl}}.
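
As a minimal sketch of the idea (the file name is illustrative, not taken from 
the patch), a test can locate target/test-classes through the classpath and 
keep its generated config there:

{code}
import java.io.File;

class TestResourceDirSketch {
  static File metricsConfigFile() throws Exception {
    // "/" resolves to the classpath root, i.e. target/test-classes in tests
    File root = new File(TestResourceDirSketch.class.getResource("/").toURI());
    return new File(root, "hadoop-metrics2-azurefs.properties");
  }
}
{code}

Anything created under target/ is removed by {{mvn clean}}, which is what keeps 
the git workspace clean.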

> hadoop-azure tests create a metrics configuration file in the module root 
> directory.
> 
>
> Key: HADOOP-12519
> URL: https://issues.apache.org/jira/browse/HADOOP-12519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12519.001.patch
>
>
> The hadoop-azure JUnit tests create a metrics configuration file.  This file 
> gets saved in the root directory of the hadoop-azure module.  This dirties 
> the git workspace and won't get removed by {{mvn clean}}, because it's 
> outside of the build target directory.  It also can cause the pre-commit 
> license check step to fail, because this ends up looking like the patch added 
> a new file without the Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12519) hadoop-azure tests create a metrics configuration file in the module root directory.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12519:
---
Status: Patch Available  (was: Open)

> hadoop-azure tests create a metrics configuration file in the module root 
> directory.
> 
>
> Key: HADOOP-12519
> URL: https://issues.apache.org/jira/browse/HADOOP-12519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12519.001.patch
>
>
> The hadoop-azure JUnit tests create a metrics configuration file.  This file 
> gets saved in the root directory of the hadoop-azure module.  This dirties 
> the git workspace and won't get removed by {{mvn clean}}, because it's 
> outside of the build target directory.  It also can cause the pre-commit 
> license check step to fail, because this ends up looking like the patch added 
> a new file without the Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12519) hadoop-azure tests create a metrics configuration file in the module root directory.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12519:
---
Target Version/s: 2.8.0

> hadoop-azure tests create a metrics configuration file in the module root 
> directory.
> 
>
> Key: HADOOP-12519
> URL: https://issues.apache.org/jira/browse/HADOOP-12519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12519.001.patch
>
>
> The hadoop-azure JUnit tests create a metrics configuration file.  This file 
> gets saved in the root directory of the hadoop-azure module.  This dirties 
> the git workspace and won't get removed by {{mvn clean}}, because it's 
> outside of the build target directory.  It also can cause the pre-commit 
> license check step to fail, because this ends up looking like the patch added 
> a new file without the Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12520) Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests.

2015-10-27 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12520:
--

 Summary: Use XInclude in hadoop-azure test configuration to 
isolate Azure Storage account keys for service integration tests.
 Key: HADOOP-12520
 URL: https://issues.apache.org/jira/browse/HADOOP-12520
 Project: Hadoop Common
  Issue Type: Improvement
  Components: azure, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth


The hadoop-azure tests support execution against the live Azure Storage service 
if the developer specifies the key to an Azure Storage account.  The 
configuration works by overwriting the src/test/resources/azure-test.xml file.  
This can be an error-prone process.  The azure-test.xml file is checked into 
revision control to show an example.  There is a risk that the tester could 
overwrite azure-test.xml containing the keys and then accidentally commit the 
keys to revision control.  This would leak the keys to the world for potential 
use by an attacker.  This issue proposes to use XInclude to isolate the keys 
into a separate file, ignored by git, which will never be committed to revision 
control.  This is very similar to the setup already used by hadoop-aws for 
integration testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12520) Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12520:
---
Status: Patch Available  (was: Open)

> Use XInclude in hadoop-azure test configuration to isolate Azure Storage 
> account keys for service integration tests.
> 
>
> Key: HADOOP-12520
> URL: https://issues.apache.org/jira/browse/HADOOP-12520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12520.001.patch
>
>
> The hadoop-azure tests support execution against the live Azure Storage 
> service if the developer specifies the key to an Azure Storage account.  The 
> configuration works by overwriting the src/test/resources/azure-test.xml 
> file.  This can be an error-prone process.  The azure-test.xml file is 
> checked into revision control to show an example.  There is a risk that the 
> tester could overwrite azure-test.xml containing the keys and then 
> accidentally commit the keys to revision control.  This would leak the keys 
> to the world for potential use by an attacker.  This issue proposes to use 
> XInclude to isolate the keys into a separate file, ignored by git, which will 
> never be committed to revision control.  This is very similar to the setup 
> already used by hadoop-aws for integration testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12520) Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12520:
---
Attachment: HADOOP-12520.001.patch

I'm attaching the proposed patch.

> Use XInclude in hadoop-azure test configuration to isolate Azure Storage 
> account keys for service integration tests.
> 
>
> Key: HADOOP-12520
> URL: https://issues.apache.org/jira/browse/HADOOP-12520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12520.001.patch
>
>
> The hadoop-azure tests support execution against the live Azure Storage 
> service if the developer specifies the key to an Azure Storage account.  The 
> configuration works by overwriting the src/test/resources/azure-test.xml 
> file.  This can be an error-prone process.  The azure-test.xml file is 
> checked into revision control to show an example.  There is a risk that the 
> tester could overwrite azure-test.xml containing the keys and then 
> accidentally commit the keys to revision control.  This would leak the keys 
> to the world for potential use by an attacker.  This issue proposes to use 
> XInclude to isolate the keys into a separate file, ignored by git, which will 
> never be committed to revision control.  This is very similar to the setup 
> already used by hadoop-aws for integration testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977253#comment-14977253
 ] 

Yongjun Zhang commented on HADOOP-12517:


Thanks [~jlowe] and [~xyao] for the input. Indeed we are seeing this with the 
most recent builds, so it may be a different issue than the one HADOOP-12312 
solved.

> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.); however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12521) Document

2015-10-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-12521:
---

 Summary: Document 
 Key: HADOOP-12521
 URL: https://issues.apache.org/jira/browse/HADOOP-12521
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


KMS delegation token support has been available since HADOOP-10769 and HADOOP-10770. 
However, the API document at 
https://hadoop.apache.org/docs/stable/hadoop-kms/index.html is still TBD. 
{code}
Delegation Tokens
TBD
{code}

This ticket is opened to document the getdelegationtoken API with its 
Request/Response.
{code}
Request:

[hdfs@c6401 vagrant]$ curl -i --negotiate -u :  -c ~/cookiejar.txt  
"http://c6401.ambari.apache.rg:16000/kms/v1/?op=getdelegationtoken&renewer=JobTracker";
HTTP/1.1 401 Unauthorized
Server: Apache-Coyote/1.1
WWW-Authenticate: Negotiate
Content-Length: 0
Date: Tue, 27 Oct 2015 21:49:15 GMT
{code}

{code}
Response:

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: application/json
Content-Length: 120
Date: Tue, 27 Oct 2015 21:49:15 GMT

{"Token":{"urlString":"KAAKaGRmcy1oZHA2NApKb2JUcmFja2VyAIoBUKtGzIGKAVDPU1CBBwQUKoAqp3yAQe2JrbuL26feN8Kv_EAGa21zLWR0AA"}}
{code}
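
The same token can also be fetched programmatically through the extension those 
two JIRAs added; a hedged sketch (provider setup omitted, renewer name 
illustrative):

{code}
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

class KmsTokenSketch {
  static Token<?>[] fetch(KeyProvider kmsProvider) throws Exception {
    Credentials creds = new Credentials();
    return KeyProviderDelegationTokenExtension
        .createKeyProviderDelegationTokenExtension(kmsProvider)
        .addDelegationTokens("JobTracker", creds); // also populates creds
  }
}
{code}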



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12521) Document KMS getdelegationtoken API

2015-10-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12521:

Summary: Document KMS getdelegationtoken API  (was: Document )

> Document KMS getdelegationtoken API
> ---
>
> Key: HADOOP-12521
> URL: https://issues.apache.org/jira/browse/HADOOP-12521
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> KMS delegation token support has been available since HADOOP-10769 and HADOOP-10770. 
> However, the API document at 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html is still TBD. 
> {code}
> Delegation Tokens
> TBD
> {code}
> This ticket is opened to document the getdelegationtoken API with its 
> Request/Response.
> {code}
> Request:
> [hdfs@c6401 vagrant]$ curl -i --negotiate -u :  -c ~/cookiejar.txt  
> "http://c6401.ambari.apache.rg:16000/kms/v1/?op=getdelegationtoken&renewer=JobTracker";
> HTTP/1.1 401 Unauthorized
> Server: Apache-Coyote/1.1
> WWW-Authenticate: Negotiate
> Content-Length: 0
> Date: Tue, 27 Oct 2015 21:49:15 GMT
> {code}
> {code}
> Response:
> HTTP/1.1 200 OK
> Server: Apache-Coyote/1.1
> Content-Type: application/json
> Content-Length: 120
> Date: Tue, 27 Oct 2015 21:49:15 GMT
> {"Token":{"urlString":"KAAKaGRmcy1oZHA2NApKb2JUcmFja2VyAIoBUKtGzIGKAVDPU1CBBwQUKoAqp3yAQe2JrbuL26feN8Kv_EAGa21zLWR0AA"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12520) Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests.

2015-10-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977280#comment-14977280
 ] 

Haohui Mai commented on HADOOP-12520:
-

+1 pending jenkins.

> Use XInclude in hadoop-azure test configuration to isolate Azure Storage 
> account keys for service integration tests.
> 
>
> Key: HADOOP-12520
> URL: https://issues.apache.org/jira/browse/HADOOP-12520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12520.001.patch
>
>
> The hadoop-azure tests support execution against the live Azure Storage 
> service if the developer specifies the key to an Azure Storage account.  The 
> configuration works by overwriting the src/test/resources/azure-test.xml 
> file.  This can be an error-prone process.  The azure-test.xml file is 
> checked into revision control to show an example.  There is a risk that the 
> tester could overwrite azure-test.xml containing the keys and then 
> accidentally commit the keys to revision control.  This would leak the keys 
> to the world for potential use by an attacker.  This issue proposes to use 
> XInclude to isolate the keys into a separate file, ignored by git, which will 
> never be committed to revision control.  This is very similar to the setup 
> already used by hadoop-aws for integration testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977289#comment-14977289
 ] 

Hadoop QA commented on HADOOP-11685:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12769055/HADOOP-11685.06.patch 
|
| JIRA Issue | HADOOP-11685 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 1d4184e62689 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-2392ab4/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 68ce93c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HAD

[jira] [Commented] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977293#comment-14977293
 ] 

Hadoop QA commented on HADOOP-10787:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-hdfs-httpfs in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-yarn in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s 
{color} | {color:green} hadoop-mapreduce-project in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 39s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-hdfs-httpfs in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-yarn in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s 
{color} | {color:green} hadoop-mapreduce-project in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed TAP tests | hadoop_add_to_classpath_toolspath.bats.tap |
|   | hadoop_add_to_classpath_toolspath.bats.tap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12769070/HADOOP-10787.03.patch 
|
| JIRA Issue | HADOOP-10787 |
| Optional Tests |  asflicense  unit  shellcheck  |
| uname | Linux af46df5af291 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build@2/patchprocess/apache-yetus-2392ab4/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 68ce93c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| shellcheck | v0.4.1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7954/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7954/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79

[jira] [Updated] (HADOOP-12519) hadoop-azure tests should avoid creating a metrics configuration file in the module root directory.

2015-10-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12519:
---
Summary: hadoop-azure tests should avoid creating a metrics configuration 
file in the module root directory.  (was: hadoop-azure tests create a metrics 
configuration file in the module root directory.)

> hadoop-azure tests should avoid creating a metrics configuration file in the 
> module root directory.
> ---
>
> Key: HADOOP-12519
> URL: https://issues.apache.org/jira/browse/HADOOP-12519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12519.001.patch
>
>
> The hadoop-azure JUnit tests create a metrics configuration file.  This file 
> gets saved in the root directory of the hadoop-azure module.  This dirties 
> the git workspace and won't get removed by {{mvn clean}}, because it's 
> outside of the build target directory.  It also can cause the pre-commit 
> license check step to fail, because this ends up looking like the patch added 
> a new file without the Apache license header.
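
One possible fix direction, sketched below with assumed property and file 
names (not the actual patch): resolve the generated file under the Maven 
build directory so that {{mvn clean}} removes it.
{code}
import java.io.File;

class TestMetricsConfig {
  // Sketch only: place the generated metrics config under target/ instead of
  // the module root; "test.build.data" is conventionally set by Hadoop test
  // harnesses, with "target" as a plain fallback.
  static File metricsFile() {
    File buildDir = new File(System.getProperty("test.build.data", "target"));
    return new File(buildDir, "hadoop-metrics2-azure-file-system.properties");
  }
}
{code}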



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12517) Findbugs reported 0 issues, but summary

2015-10-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977304#comment-14977304
 ] 

Jason Lowe commented on HADOOP-12517:
-

I got the impression HADOOP-12312 was the same issue but only fixed in the 
HADOOP-12111 branch.  This may be a fix that never made it to trunk and is only 
in Yetus.  I don't believe Yetus is enabled on all projects yet.  Hopefully 
[~aw] can clarify.

> Findbugs reported 0 issues, but summary 
> 
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.), however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977318#comment-14977318
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2535 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2535/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.
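
The shape of the proposed fix, as a sketch rather than the actual 
{{Client.java}} change:
{code}
import java.io.IOException;

import org.apache.hadoop.security.SaslRpcClient;

class SaslErrorHandling {
  // If constructing the SaslRpcClient itself failed, there is no SASL state
  // to inspect, so rethrow the original exception instead of dereferencing
  // the null field (which is what produced the NPE).
  static void checkSaslClient(SaslRpcClient saslRpcClient, IOException ex)
      throws IOException {
    if (saslRpcClient == null) {
      throw ex;
    }
    // ...otherwise continue with the normal SASL error handling...
  }
}
{code}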



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977316#comment-14977316
 ] 

Hudson commented on HADOOP-12494:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2535 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2535/])
HADOOP-12494. fetchdt stores the token based on token kind instead of (aw: rev 
1396867b52533ecf894158a464c6cd3abc7041b9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the token 
> is the token kind instead of the token service.
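
A sketch of the fix direction (not the exact {{DelegationTokenFetcher}} 
diff): key each saved token by its service, which is unique per endpoint, 
rather than by its kind:
{code}
import java.util.Collection;

import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

class TokenSaver {
  // Store each token under its service (unique per endpoint) rather than its
  // kind (shared by every token of the same type).
  static Credentials keyByService(Collection<Token<?>> tokens) {
    Credentials cred = new Credentials();
    for (Token<?> token : tokens) {
      cred.addToken(token.getService(), token);  // previously keyed by getKind()
    }
    return cred;
  }
}
{code}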



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977334#comment-14977334
 ] 

Chris Nauroth commented on HADOOP-11685:


Hi [~onpduo].

Patch v06 is looking pretty good.  The "asflicense" failure is unrelated, and 
it will be fixed by my HADOOP-12519 patch.

Instead of making {{NativeAzureFileSystem#createPermissionStatus}} public, I 
recommend making it package-private (no access modifier) and applying the 
{{VisibleForTesting}} annotation.
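
That suggestion as a sketch; the signature is illustrative, not the exact 
{{NativeAzureFileSystem}} method:
{code}
import java.io.IOException;

import com.google.common.annotations.VisibleForTesting;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.permission.PermissionStatus;
import org.apache.hadoop.security.UserGroupInformation;

class PermissionHelper {
  // Package-private (no access modifier) so that only same-package tests can
  // call it; the annotation documents that the widened visibility exists for
  // tests alone.
  @VisibleForTesting
  PermissionStatus createPermissionStatus(FsPermission permission)
      throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return new PermissionStatus(ugi.getShortUserName(), "", permission);
  }
}
{code}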

I'm comparing the v03 patch and the v06 patch, and it looks like the exception 
handling in v06 could swallow some exceptions by mistake.  v03 had an {{else}} 
block for the case when the {{IOException}} has a cause that is not a 
{{StorageException}}.  That {{else}} block has gone away in v06.  It seems this 
could swallow general I/O errors, like connection refused, when they should get 
reported back to the caller.
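
Roughly the missing branch, sketched with assumed method and error-code names 
drawn from the stack trace rather than from the patch:
{code}
import java.io.IOException;

import com.microsoft.windowsazure.storage.StorageException;

class LeaseAwareErrorHandling {
  // Only the lease-conflict case should be retried under a lease; any other
  // cause (e.g. connection refused) must propagate to the caller.
  static void handle(IOException e) throws IOException {
    Throwable cause = e.getCause();
    if (cause instanceof StorageException
        && "LeaseIdMissing".equals(((StorageException) cause).getErrorCode())) {
      // acquire the folder lease and retry the operation (the recovery path)
    } else {
      throw e;  // the v03-style else block: do not swallow unrelated errors
    }
  }
}
{code}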

I had made an earlier comment about how this patch might fix the {{mkdirs}} 
call, only for HBase log splitting to fail at a later step. What are your 
thoughts on this?

https://issues.apache.org/jira/browse/HADOOP-11685?focusedCommentId=14970059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14970059

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:8

[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977340#comment-14977340
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2481 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2481/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977369#comment-14977369
 ] 

Duo Xu commented on HADOOP-11685:
-

[~cnauroth]

That is a copy-paste mistake; I will submit a new patch.

Looking at the code, {{createNonRecursive}} acquires the lease on the folder 
before creating files under it, and {{rename}} acquires the lease on the 
folder before updating its metadata.

So at any one time there can be three types of threads accessing that folder: 
one doing step 3, one doing step 4, and one doing step 5. Step 3 will not fail 
after the patch, and steps 4 and 5 both acquire the lease before creating 
files or updating folder properties. I therefore don't think there are any 
thread conflicts here.

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNa

[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977383#comment-14977383
 ] 

Hudson commented on HADOOP-12494:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1328/])
HADOOP-12494. fetchdt stores the token based on token kind instead of (aw: rev 
1396867b52533ecf894158a464c6cd3abc7041b9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java


> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the token 
> is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977404#comment-14977404
 ] 

Hudson commented on HADOOP-12178:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/544/])
HADOOP-12178. NPE during handling of SASL setup if problem with SASL (zxu: rev 
ed9806ea40b945df0637c21b68964d1d2bd204f3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977402#comment-14977402
 ] 

Hudson commented on HADOOP-12494:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/544/])
HADOOP-12494. fetchdt stores the token based on token kind instead of (aw: rev 
1396867b52533ecf894158a464c6cd3abc7041b9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java


> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the token 
> is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977409#comment-14977409
 ] 

Chris Nauroth commented on HADOOP-11685:


[~onpduo], thanks for digging into that more and explaining the full flow.  I 
agree that the lease-holding operations will not trigger similar failures.  
This is almost ready to commit, pending another revision to address what I 
pointed out in the prior comment.  Thanks!

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.stor

[jira] [Commented] (HADOOP-12494) fetchdt stores the token based on token kind instead of token service

2015-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977420#comment-14977420
 ] 

Hudson commented on HADOOP-12494:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #605 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/605/])
HADOOP-12494. fetchdt stores the token based on token kind instead of (aw: rev 
1396867b52533ecf894158a464c6cd3abc7041b9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java


> fetchdt stores the token based on token kind instead of token service
> -
>
> Key: HADOOP-12494
> URL: https://issues.apache.org/jira/browse/HADOOP-12494
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
> Fix For: 3.0.0
>
> Attachments: HADOOP-12494, HADOOP-12494.patch
>
>
> The fetchdt command stores the token in a file. However, the key of the token 
> is the token kind instead of the token service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-27 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch, 
> HADOOP-11685.06.patch, HADOOP-11685.07.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(Executi
