[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11772:
-
Attachment: after-ipc-fix.png

[~ajisakaa]: looks much better with the patch.

!after-ipc-fix.png!

I still see the occasional blocked getConnection(), but that's because I'm 
running 24 threads in parallel with 10 IPC Client instances.



> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Akira AJISAKA
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, after-ipc-fix.png, dfs-sync-ipc.png, 
> sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS = new ClientCache();
>   ...
>   this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>     SocketFactory factory, Class valueClass) {
>   ...
>   Client client = clients.get(factory);
>   if (client == null) {
>     client = new Client(valueClass, conf, factory);
>     clients.put(factory, client);
>   } else {
>     client.incCount();
>   }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!
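A common way to relieve this kind of single-lock cache is a ConcurrentHashMap keyed lookup with computeIfAbsent, which creates at most one client per key without serializing lookups on different keys. Below is a minimal, hypothetical sketch of that idea only; the class and member names are invented for illustration and this is not the actual HADOOP-11772 patch (which is attached to the issue).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: ClientCacheSketch and SimpleClient are
// invented names, not Hadoop's actual types.
final class ClientCacheSketch {

    static final class SimpleClient {
        // starts at 1, like a freshly created client in the original code
        final AtomicInteger refCount = new AtomicInteger(1);
        void incCount() { refCount.incrementAndGet(); }
    }

    private final ConcurrentMap<String, SimpleClient> clients =
        new ConcurrentHashMap<>();

    SimpleClient getClient(String factoryKey) {
        boolean[] created = {false};
        // computeIfAbsent runs the factory at most once per key, atomically,
        // so callers on different keys never contend on a single lock
        SimpleClient client = clients.computeIfAbsent(factoryKey, k -> {
            created[0] = true;
            return new SimpleClient();
        });
        if (!created[0]) {
            client.incCount();  // existing client: bump the reference count
        }
        return client;
    }
}
```

Two threads asking for the same key get the same instance with the reference count bumped; distinct keys proceed independently.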



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484889#comment-14484889
 ] 

Hadoop QA commented on HADOOP-11772:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723867/after-ipc-fix.png
  against trunk revision dd852f5.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6074//console

This message is automatically generated.



[jira] [Created] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-11812:
--

 Summary: Implement listLocatedStatus for ViewFileSystem to speed 
up split calculation
 Key: HADOOP-11812
 URL: https://issues.apache.org/jira/browse/HADOOP-11812
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker








[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Description: ViewFileSystem is currently not taking advantage of 
MAPREDUCE-1981. This incurs several times the necessary RPC overhead and adds latency.

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> incurs several times the necessary RPC overhead and adds latency.
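The improvement amounts to forwarding the combined listing-plus-locations call through the mount table to the backing filesystem, instead of falling back to a listStatus call followed by one getFileBlockLocations RPC per file. A toy sketch under invented names ("Fs" stands in for FileSystem, the strings for LocatedFileStatus entries; this is not Hadoop's real API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// One combined call that returns listing entries together with their
// block locations (modeled here as "path@host" strings).
interface Fs {
    List<String> listLocatedStatus(String path);
}

// A mount-table filesystem that delegates the combined call to the
// backing filesystem resolved from the mount table.
final class ViewFsSketch implements Fs {
    private final Map<String, Fs> mountTable;

    ViewFsSketch(Map<String, Fs> mountTable) { this.mountTable = mountTable; }

    @Override
    public List<String> listLocatedStatus(String path) {
        String best = "";
        for (String prefix : mountTable.keySet()) {
            if (path.startsWith(prefix) && prefix.length() > best.length()) {
                best = prefix;  // longest-prefix match wins
            }
        }
        Fs target = mountTable.get(best);
        if (target == null) {
            throw new IllegalArgumentException("no mount point for " + path);
        }
        // delegate so the target can answer listing + locations in one RPC
        return target.listLocatedStatus(path.substring(best.length()));
    }
}
```

With the delegation in place, split calculation over N files needs one listing call per directory rather than 1 + N calls.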





[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Attachment: HADOOP-11812.001.patch



[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Affects Version/s: 2.7.0
   Status: Patch Available  (was: Open)



[jira] [Commented] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484960#comment-14484960
 ] 

Hadoop QA commented on HADOOP-11812:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723872/HADOOP-11812.001.patch
  against trunk revision dd852f5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewfsFileStatus

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.util.TestMachineList

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6075//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6075//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6075//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485043#comment-14485043
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt


> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485055#comment-14485055
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}
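Such a skip is normally expressed as a platform assumption at the top of the test. A standalone sketch of the underlying check (the patch itself is not shown in this thread; in a Hadoop test the guard would typically be a JUnit Assume call against a flag like Shell.WINDOWS, so the test reports "skipped" rather than failing):

```java
import java.util.Locale;

// Sketch of the platform guard such a skip relies on. In a real test
// this would back an assumption such as Assume.assumeTrue(!isWindows),
// evaluated against System.getProperty("os.name").
final class PlatformGuard {
    static boolean isWindows(String osName) {
        // "Windows Server 2008 R2", "Windows 10", etc. all match
        return osName.toLowerCase(Locale.ROOT).startsWith("windows");
    }
}
```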





[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485051#comment-14485051
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.
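The redirect decision described above can be modeled in a few lines. A standalone sketch only: the actual handler extends AltKerberosAuthenticationHandler and verifies tokens with nimbus-jose-jwt, while the Token type and method names below are invented for illustration.

```java
import java.time.Instant;
import java.util.Optional;

final class SsoRedirectSketch {

    static final class Token {
        final boolean signatureValid;  // outcome of cryptographic verification
        final Instant expiry;          // the token's expiration claim
        Token(boolean signatureValid, Instant expiry) {
            this.signatureValid = signatureValid;
            this.expiry = expiry;
        }
    }

    // Redirect to the external SSO service only when there is no
    // hadoop.auth cookie and no trusted, unexpired JWT in the request.
    static boolean shouldRedirect(boolean hasAuthCookie,
                                  Optional<Token> jwt, Instant now) {
        if (hasAuthCookie) {
            return false;  // existing session: no redirect
        }
        return jwt.map(t -> !t.signatureValid || !t.expiry.isAfter(now))
                  .orElse(true);  // no token at all: redirect
    }
}
```

An expired or unverifiable token is treated exactly like a missing one, which is what gives the limited window for compromised use.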





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485128#comment-14485128
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java




[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485116#comment-14485116
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485115#comment-14485115
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485103#comment-14485103
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485124#comment-14485124
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-auth/pom.xml
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java




[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485111#comment-14485111
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java




[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485144#comment-14485144
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* hadoop-common-project/hadoop-auth/pom.xml
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.
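The expiration benefit listed above can be illustrated with a short, self-contained sketch. This is not the Hadoop handler's code (which uses the nimbus-jose-jwt library and also verifies the token's signature); the class name and the naive string-based claim extraction below are assumptions for brevity.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: shows why the "exp" claim gives JWTs a bounded
// lifetime. A real validator would parse the payload as JSON and check
// the signature before trusting any claim.
public class JwtExpirySketch {

    /** Returns true when the token's "exp" claim is before nowSeconds. */
    static boolean isExpired(String jwt, long nowSeconds) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("not a compact JWT");
        }
        String payload = new String(
            Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        int idx = payload.indexOf("\"exp\":");
        if (idx < 0) {
            return false; // no expiry claim; a real validator might reject this
        }
        String tail = payload.substring(idx + 6);
        long exp = Long.parseLong(tail.split("[,}]")[0].trim());
        return exp < nowSeconds;
    }

    public static void main(String[] args) {
        String enc = Base64.getUrlEncoder().withoutPadding()
            .encodeToString("{\"sub\":\"alice\",\"exp\":1000}"
                .getBytes(StandardCharsets.UTF_8));
        String token = "e30." + enc + ".sig"; // "e30" is the header "{}"
        System.out.println(isExpired(token, 2000)); // past expiry -> true
        System.out.println(isExpired(token, 500));  // before expiry -> false
    }
}
```

A rejected-because-expired token forces the client back through the external SSO service, which is the "limited lifetime and window for compromised use" property described above.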



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485148#comment-14485148
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java


> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:<init>(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:<init>(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}
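The fix amounts to guarding the test with a platform check. The actual patch presumably uses JUnit's Assume mechanism together with Hadoop's {{Shell.WINDOWS}} constant; the stand-in below uses only the JDK and is illustrative.

```java
// Minimal sketch of a skip-on-Windows guard, assuming os.name detection
// is an acceptable stand-in for Hadoop's Shell.WINDOWS constant.
public class WindowsSkipSketch {

    static boolean isWindows() {
        return System.getProperty("os.name")
            .toLowerCase().startsWith("windows");
    }

    public static void main(String[] args) {
        if (isWindows()) {
            // In JUnit this would be Assume.assumeTrue(!isWindows()),
            // which marks the test skipped rather than failed.
            System.out.println("skipping testStaticMapUpdate: unsupported platform");
            return;
        }
        System.out.println("running testStaticMapUpdate");
    }
}
```

Skipping (rather than letting the test fail with {{NoSuchElementException}}) keeps Windows CI runs green for behavior that ShellBasedIdMapping explicitly does not support there.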





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485136#comment-14485136
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485289#comment-14485289
 ] 

Eric Payne commented on HADOOP-11802:
-

Thanks [~cmccabe] for your comment and interest in this issue.

This problem is happening in multiple live clusters. Only a small 
percentage of datanodes is affected each day, but once a datanode hits this and 
the threads pile up, it must be restarted.

The only 'terminating on' message in the DN log comes from 
DomainSocketWatcher's unhandled exception handler. That is, it's the one 
documented in the description above:
{quote}
{noformat}
2015-04-04 13:12:31,059 [Thread-12] ERROR unix.DomainSocketWatcher: 
Thread[Thread-12,5,main] terminating on unexpected exception
java.lang.IllegalStateException: failed to remove 
17e33191fa8238098d7d22142f5787e2
2015-04-02 11:48:09,941 [DataXceiver for client 
unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
Thread[Thread-14,5,main] terminating on unexpected exception
java.lang.IllegalStateException: failed to remove 
b845649551b6b1eab5c17f630e42489d
...
{noformat}
{quote}
As you pointed out, that happens after something has already gone wrong in 
the main try block of the watcher thread. Since I'm seeing neither 'terminating 
on InterruptedException' nor 'terminating on IOException', some other exception 
must be occurring, yet the only reference to {{DomainSocketWatcher}} in the DN 
log is the stack trace already mentioned.

Just above the IllegalStateException stack trace, however, is the following, 
which indicates a premature EOF occurred. There were several of these, but it's 
not clear that they are related to the reason the DomainSocketWatcher exited.
Your input would be greatly appreciated.
{noformat}
2015-04-02 11:48:09,885 [DataXceiver for client 
DFSClient_attempt_1427231924849_569467_m_000135_0_346288762_1 at 
/xxx.xxx.xxx.xxx:41908 [Receiving block 
BP-658831282-xxx.xxx.xxx.xxx-1351509219914:blk_3365919992_1105804585360]] ERROR 
datanode.DataNode: gsta70851.tan.ygrid.yahoo.com:1004:DataXceiver error 
processing WRITE_BLOCK operation  src: /xxx.xxx.xxx.xxx:41908 dst: 
/xxx.xxx.xxx.xxx:1004
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:467)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:722)
{noformat}

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for notificationSockets[0] to read a 
> byte
>   for (Entry entry : entries.values()) {
> // We do not remove from entries as we iterate, because that can
> // cause a ConcurrentModificationException.
> sendCallback("close", entries, fdSet, entry.getDomainSocket().fd);
>   }
>   entries.clear();
>   fdSet.close();
> } finally {
>   lock.unlock();
> }
>   }
> {code}
> The exception causes {{watcherThread}} to skip the calls to 
> {{entries.clear()}} and {{fdSet.close()}}.

[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485380#comment-14485380
 ] 

Eric Payne commented on HADOOP-11802:
-

Sorry, I just noticed that the following was the first exception in the series:
{noformat}
2015-04-02 11:48:09,866 [DataXceiver for client 
unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] ERROR 
datanode.DataNode: gsta70851.tan.ygrid.yahoo.com:1004:DataXceiver error 
processing REQUEST_SHORT_CIRCUIT_SHM operation  src: 
unix:/home/gs/var/run/hdfs/dn_socket dst: 
java.net.SocketException: write(2) error: Broken pipe
at org.apache.hadoop.net.unix.DomainSocket.writeArray0(Native Method)
at 
org.apache.hadoop.net.unix.DomainSocket.access$300(DomainSocket.java:45)
at 
org.apache.hadoop.net.unix.DomainSocket$DomainOutputStream.write(DomainSocket.java:601)
at 
com.google.protobuf.CodedOutputStream.refreshBuffer(CodedOutputStream.java:833)
at 
com.google.protobuf.CodedOutputStream.flush(CodedOutputStream.java:843)
at 
com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:91)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.sendShmSuccessResponse(DataXceiver.java:380)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:418)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
{noformat}


> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for notificationSockets[0] to read a 
> byte
>   for (Entry entry : entries.values()) {
> // We do not remove from entries as we iterate, because that can
> // cause a ConcurrentModificationException.
> sendCallback("close", entries, fdSet, entry.getDomainSocket().fd);
>   }
>   entries.clear();
>   fdSet.close();
> } finally {
>   lock.unlock();
> }
>   }
> {code}
> The exception causes {{watcherThread}} to skip the calls to 
> {{entries.clear()}} and {{fdSet.close()}}.
> {code}
> 2015-04-02 11:48:09,941 [DataXceiver for client 
> unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
> DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
> 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
> e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
> 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
> Thread[Thread-14,5,main] terminating on unexpected exception
> java.lang.IllegalStateException: failed to remove 
> b845649551b6b1eab5c17f630e42489d
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
> at java.lang.Thread.run(Thread.java:722)
> {code}
> Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
> HADOOP-10404. The cluster installation is running code with all of these 
> fixes.
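One possible hardening for the finally block quoted above is to catch the per-entry exception inside the loop, so a single failing callback cannot abort cleanup and skip {{entries.clear()}} and {{fdSet.close()}}. The sketch below is a simplified stand-in, not Hadoop's types, and whether the eventual patch took this exact approach is not shown here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: exception-tolerant cleanup loop. Handler stands in for
// the per-socket callback that may throw IllegalStateException.
public class CleanupSketch {

    interface Handler {
        void close() throws IllegalStateException;
    }

    /** Closes every entry, counting failures instead of propagating them. */
    static int closeAll(Map<Integer, Handler> entries) {
        int failures = 0;
        for (Map.Entry<Integer, Handler> e : entries.entrySet()) {
            try {
                e.getValue().close();
            } catch (IllegalStateException ex) {
                // Log-and-continue: one bad entry must not stop cleanup.
                failures++;
            }
        }
        entries.clear(); // now guaranteed to run even if a callback threw
        return failures;
    }

    public static void main(String[] args) {
        Map<Integer, Handler> entries = new LinkedHashMap<>();
        entries.put(1, () -> {});
        entries.put(2, () -> {
            throw new IllegalStateException("failed to remove");
        });
        entries.put(3, () -> {});
        int failed = closeAll(entries);
        System.out.println(failed + " failed, " + entries.size() + " left");
    }
}
```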





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485508#comment-14485508
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

[~gkesavan], how do we test this on a Jenkins instance?

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11781:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> fix race conditions and add URL support to smart-apply-patch.sh
> ---
>
> Key: HADOOP-11781
> URL: https://issues.apache.org/jira/browse/HADOOP-11781
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Raymie Stata
> Fix For: 3.0.0
>
> Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch, 
> HADOOP-11781-03.patch
>
>
> smart-apply-patch.sh has a few race conditions and is just generally crufty.  
> It should really be rewritten.





[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-04-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11813:
--
Labels: newbie  (was: )

> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: newbie
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Created] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-04-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11813:
-

 Summary: releasedocmaker.py should use today's date instead of 
unreleased
 Key: HADOOP-11813
 URL: https://issues.apache.org/jira/browse/HADOOP-11813
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Minor


After discussing with a few folks, it'd be more convenient if releasedocmaker 
used the current date rather than unreleased when processing a version that 
JIRA hasn't declared released.





[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485560#comment-14485560
 ] 

Allen Wittenauer commented on HADOOP-11813:
---

Effectively:

Add a flag called {{--usetoday}}.  If the version is unreleased and {{--usetoday}} is 
set, then use today's date.  The pom.xml file for {{-Preleasedocs}} also needs to 
have this flag added to the argument list.
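The flag's behavior amounts to a small substitution rule. releasedocmaker itself is a Python script, so the Java sketch below is only an illustration of the rule, with the method and sentinel string ("Unreleased") as assumed names, not the script's actual code.

```java
import java.time.LocalDate;

// Illustrative rule for --usetoday: substitute today's date only when
// JIRA reports the version as unreleased AND the flag was given.
public class ReleaseDateSketch {

    static String releaseDate(String jiraDate, boolean useToday) {
        if ("Unreleased".equals(jiraDate) && useToday) {
            return LocalDate.now().toString(); // e.g. 2015-04-08
        }
        return jiraDate;
    }

    public static void main(String[] args) {
        System.out.println(releaseDate("2015-04-08", true));
        System.out.println(releaseDate("Unreleased", true));
        System.out.println(releaseDate("Unreleased", false));
    }
}
```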

> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: newbie
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485609#comment-14485609
 ] 

Chris Nauroth commented on HADOOP-11746:


As per discussion on common-dev, we'd like to be able to exit early from 
test-patch runs against an attachment if the file name doesn't match *.patch.  
This would cut down on spam from Jenkins trying to run test-patch on other 
kinds of attachments, like screenshots and design docs.

Another potential improvement on top of that would be to skip runs if the patch 
file name contains a branch name, because we only do pre-commit for trunk 
currently.

Just as a reminder, naming conventions for patch files are documented here:

https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
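The proposed early-exit check is essentially a filename filter. The pattern below is an illustrative guess at the "*.patch" rule being discussed, not the one test-patch.sh eventually shipped with.

```java
import java.util.regex.Pattern;

// Hypothetical attachment filter: run test-patch only on files whose
// names end in ".patch" (case-insensitive), skipping screenshots,
// design docs, and other attachments.
public class PatchNameFilter {

    private static final Pattern PATCH = Pattern.compile("(?i).+\\.patch$");

    static boolean looksLikePatch(String attachmentName) {
        return PATCH.matcher(attachmentName).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikePatch("HADOOP-11746-11.patch")); // true
        System.out.println(looksLikePatch("after-ipc-fix.png"));     // false
        System.out.println(looksLikePatch("design.pdf"));            // false
    }
}
```

Extending the same idea to branch detection would mean inspecting the segment between the issue key and the extension, per the naming conventions on the wiki page above.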


> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485621#comment-14485621
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

bq.  Another potential improvement on top of that would be to skip runs if the 
patch file name contains a branch name, because we only do pre-commit for trunk 
currently.

This code actually can do pre-commit runs for non-trunk based upon the patch 
name...

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485629#comment-14485629
 ] 

Hudson commented on HADOOP-11781:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7532 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7532/])
HADOOP-11781. fix race conditions and add URL support to smart-apply-patch.sh 
(Raymie Stata via aw) (aw: rev f4b3fc56210824037344d403f1ad0f033961a2db)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/smart-apply-patch.sh


> fix race conditions and add URL support to smart-apply-patch.sh
> ---
>
> Key: HADOOP-11781
> URL: https://issues.apache.org/jira/browse/HADOOP-11781
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Raymie Stata
> Fix For: 3.0.0
>
> Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch, 
> HADOOP-11781-03.patch
>
>
> smart-apply-patch.sh has a few race conditions and is just generally crufty.  
> It should really be rewritten.





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485689#comment-14485689
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/158/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt


> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485697#comment-14485697
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/158/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485701#comment-14485701
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/158/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java


> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:<init>(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:<init>(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}





[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485730#comment-14485730
 ] 

Hudson commented on HADOOP-11717:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2107 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2107/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-project/pom.xml


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485734#comment-14485734
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2107 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2107/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java


> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}
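The failure mode in the stack trace can be reproduced in isolation: calling {{next()}} on an exhausted iterator over a {{HashMap}} (here, an empty one, plausibly matching the never-populated static UID/GID map on unsupported Windows platforms) throws {{NoSuchElementException}}. The class name below is invented for the demonstration:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

public class EmptyMapIteration {
    // Iterating an empty HashMap and calling next() without checking
    // hasNext() throws NoSuchElementException -- the same exception seen
    // in testStaticMapUpdate when the static mapping is never loaded.
    static boolean throwsNoSuchElement() {
        Iterator<Map.Entry<String, Integer>> it =
                new HashMap<String, Integer>().entrySet().iterator();
        try {
            it.next();          // no entries to return
            return false;
        } catch (NoSuchElementException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsNoSuchElement()); // true
    }
}
```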



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485722#comment-14485722
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2107 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2107/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt


> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11814) Reformat hadoop-annotations/RootDocProcessor

2015-04-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11814:
--

 Summary: Reformat hadoop-annotations/RootDocProcessor
 Key: HADOOP-11814
 URL: https://issues.apache.org/jira/browse/HADOOP-11814
 Project: Hadoop Common
  Issue Type: Task
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor


RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Summary: Reformat hadoop-annotations, o.a.h.classification.tools  (was: 
Reformat hadoop-annotations/RootDocProcessor)

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485894#comment-14485894
 ] 

Owen O'Malley commented on HADOOP-11746:


This looks like a nice improvement.

It would be great if we could name patch files like git.patch and have them 
apply to the given git hash. That would let you upload patches for branches 
without worrying about conflicting changes.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Attachment: HADOOP-11814-040815.patch

Uploaded a patch to reformat all classes in o.a.h.classification.tools, 
replacing all tabs with spaces. 

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Status: Patch Available  (was: Open)

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485909#comment-14485909
 ] 

Haohui Mai commented on HADOOP-11814:
-

+1 pending Jenkins.

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486368#comment-14486368
 ] 

Hadoop QA commented on HADOOP-11815:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723608/0001-MAPREDUCE-6311.patch
  against trunk revision bd4c99b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6080//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6080//console

This message is automatically generated.

> AM JVM hungs after job unregister and finished
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6311.patch, 0001-MAPREDUCE-6311.patch, 
> MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486488#comment-14486488
 ] 

Rohith commented on HADOOP-11815:
-

Updated the patch as per the review comment. Verified manually in a cluster; 
it is working fine.

Kindly review the patch.

> AM JVM hungs after job unregister and finished
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-HADOOP-11815.patch, 0001-MAPREDUCE-6311.patch, 
> 0001-MAPREDUCE-6311.patch, MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486618#comment-14486618
 ] 

Hadoop QA commented on HADOOP-11812:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12724096/HADOOP-11812.003.patch
  against trunk revision dc0282d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6083//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6083//console

This message is automatically generated.

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch, 
> HADOOP-11812.003.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes a several-fold increase in RPC overhead and added latency.
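The RPC overhead at stake can be sketched with a toy cost model (all names below are illustrative, not Hadoop's API): a filesystem whose {{listLocatedStatus}} falls back to the default path pays one listing call plus one block-location call per file, while an override that piggybacks locations on the listing pays one call total:

```java
import java.util.ArrayList;
import java.util.List;

public class RpcCountSketch {
    static int rpcCalls = 0;

    // One RPC returns the directory listing.
    static List<String> listStatus(int files) {
        rpcCalls++;
        List<String> names = new ArrayList<>();
        for (int i = 0; i < files; i++) names.add("part-" + i);
        return names;
    }

    // One RPC per file resolves its block locations.
    static void blockLocations(String file) {
        rpcCalls++;
    }

    // Default-style behavior: listStatus plus a per-file location
    // lookup, i.e. N + 1 RPCs for N files.
    static int defaultListLocatedStatus(int files) {
        rpcCalls = 0;
        for (String f : listStatus(files)) blockLocations(f);
        return rpcCalls;
    }

    // Overridden behavior: locations are returned with the listing,
    // i.e. a single RPC regardless of file count.
    static int overriddenListLocatedStatus(int files) {
        rpcCalls = 0;
        rpcCalls++;
        return rpcCalls;
    }

    public static void main(String[] args) {
        System.out.println(defaultListLocatedStatus(1000));    // 1001
        System.out.println(overriddenListLocatedStatus(1000)); // 1
    }
}
```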



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11814:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gtCarrera9] for the 
contribution.

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Fix For: 2.8.0
>
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486133#comment-14486133
 ] 

Colin Patrick McCabe commented on HADOOP-11802:
---

It's clear to me that the proximate cause of the {{DomainSocketWatcher}} thread 
exiting on the {{DataNode}} is that it tried to remove a shared memory segment 
ID that was not registered.  But if I'm reading these stack traces right, the 
attempted removal is happening in the finally block, a place where we should 
never actually be except in unit tests.  That means that there was another 
exception that triggered this whole problem.  Without knowing what that root 
cause is, I don't think we can get any further on this.

I suggest adding another catch block here: 

{code}
doPoll0(interruptCheckPeriodMs, fdSet);
}
  } catch (InterruptedException e) {
LOG.info(toString() + " terminating on InterruptedException");
  } catch (IOException e) {
LOG.error(toString() + " terminating on IOException", e);
  } finally {
lock.lock();
{code}

If we had a catch block catching {{RuntimeException}} and printing it out, that 
might give you the true root cause.
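The masking effect being described can be shown with a self-contained sketch (class and method names are invented for the demonstration): an exception thrown during cleanup in a finally block replaces the original exception from the try body, so without an intermediate catch the root cause is lost:

```java
public class FinallyMasking {
    // Without an intermediate catch, the cleanup failure thrown from the
    // finally block supersedes the original exception from the try body.
    static String withoutCatch() {
        try {
            try {
                throw new RuntimeException("root cause");
            } finally {
                throw new IllegalStateException("cleanup failed");
            }
        } catch (Exception e) {
            return e.getMessage();  // only "cleanup failed" is visible
        }
    }

    // With a catch (RuntimeException) that records (e.g. logs) the
    // exception before the finally block runs, the root cause survives
    // even though cleanup still fails afterwards.
    static String withCatch() {
        String rootCause = "none";
        try {
            try {
                try {
                    throw new RuntimeException("root cause");
                } catch (RuntimeException e) {
                    rootCause = e.getMessage();  // log before cleanup
                    throw e;
                }
            } finally {
                throw new IllegalStateException("cleanup failed");
            }
        } catch (Exception ignored) {
            // cleanup failure still propagates, but the root cause was kept
        }
        return rootCause;
    }

    public static void main(String[] args) {
        System.out.println(withoutCatch()); // cleanup failed
        System.out.println(withCatch());    // root cause
    }
}
```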

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for notificationSockets[0] to read a 
> byte
>   for (Entry entry : entries.values()) {
> // We do not remove from entries as we iterate, because that can
> // cause a ConcurrentModificationException.
> sendCallback("close", entries, fdSet, entry.getDomainSocket().fd);
>   }
>   entries.clear();
>   fdSet.close();
> } finally {
>   lock.unlock();
> }
>   }
> {code}
> The exception causes {{watcherThread}} to skip the calls to 
> {{entries.clear()}} and {{fdSet.close()}}.
> {code}
> 2015-04-02 11:48:09,941 [DataXceiver for client 
> unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
> DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
> 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
> e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
> 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
> Thread[Thread-14,5,main] terminating on unexpected exception
> java.lang.IllegalStateException: failed to remove 
> b845649551b6b1eab5c17f630e42489d
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
> at java.lang.Thread.run(Thread.java:722)
> {code}
> Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
> HADOOP-10404. The cluster installation is running code with all of these 
> fixes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486340#comment-14486340
 ] 

Hadoop QA commented on HADOOP-11815:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723608/0001-MAPREDUCE-6311.patch
  against trunk revision 265ed1f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6079//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6079//console

This message is automatically generated.

> AM JVM hungs after job unregister and finished
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6311.patch, 0001-MAPREDUCE-6311.patch, 
> MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486456#comment-14486456
 ] 

Yi Liu commented on HADOOP-11789:
-

Colin, I'm OK to close it as WONTFIX. 
[~steve_l] and [~xyao], do you have comments? If not, I will close it.

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486661#comment-14486661
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

If I rename the patch file to be HADOOP-11746-12.git5449adc.patch, it provides 
this output in the JIRA message.  Note how it gives the 5449adc reference 
instead of trunk above.

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec | 00m 00s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch | 00m 00s | Pre-patch 5449adc compilation is 
healthy. |
| {color:red}-1{color} | @author | 00m 00s | The patch appears to contain 13 
@author tags which the Hadoop  community has agreed to not allow in code 
contributions. |
| {color:green}+1{color} | whitespace | 00m 00s | The patch has no   lines that 
end in whitespace. |
| {color:green}+1{color} | release audit | 00m 09s | The applied patch does not 
increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck | 00m 02s | There were no new shellcheck 
issues. |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | shellcheck |
| git revision | 5449adc / 5449adc |
| Console output | /artifact/patchprocess/console |

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486426#comment-14486426
 ] 

Hadoop QA commented on HADOOP-11814:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723998/HADOOP-11814-040815.patch
  against trunk revision cc25823.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-annotations.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6081//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6081//console

This message is automatically generated.

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486234#comment-14486234
 ] 

Hadoop QA commented on HADOOP-11812:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12724032/HADOOP-11812.002.patch
  against trunk revision 265ed1f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6078//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6078//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6078//console

This message is automatically generated.

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes a several-fold increase in RPC overhead and added latency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated HADOOP-11815:

Attachment: 0001-HADOOP-11815.patch

> AM JVM hungs after job unregister and finished
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-HADOOP-11815.patch, 0001-MAPREDUCE-6311.patch, 
> 0001-MAPREDUCE-6311.patch, MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486267#comment-14486267
 ] 

Gera Shegalov commented on HADOOP-11812:


The test failure is unrelated. I will update the patch to satisfy FindBugs, 
but the path-based equality defined in FileStatus is actually sufficient.

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes a several-fold increase in RPC overhead and added latency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486609#comment-14486609
 ] 

Hadoop QA commented on HADOOP-11815:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12724093/0001-HADOOP-11815.patch
  against trunk revision dc0282d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6082//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6082//console

This message is automatically generated.

> AM JVM hangs after job unregisters and finishes
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-HADOOP-11815.patch, 0001-MAPREDUCE-6311.patch, 
> 0001-MAPREDUCE-6311.patch, MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.





[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486003#comment-14486003
 ] 

Colin Patrick McCabe commented on HADOOP-11789:
---

bq. Colin, if -Pnative is set but the OS doesn't have a correct version of 
OpenSSL, then we should make the test fail.

Yes.  Absolutely.

We've had some very subtle bugs earlier where we thought we were testing the 
openssl integration, but we actually were not. This is because there was a very 
subtle difference in the version of openssl installed on the jenkins machine 
and the one we needed.

I think the highest priority here is to make sure our Jenkins coverage doesn't 
regress again.  If people running tests locally want to skip the native tests, 
they can simply run the test suite without {{\-Pnative}}, or skip running the 
openssl test.

I think we should just close this as WONTFIX; what do you think?

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Comment Edited] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486660#comment-14486660
 ] 

Allen Wittenauer edited comment on HADOOP-11746 at 4/9/15 4:27 AM:
---

-12:
* add --resetrepo to simulate jenkins git repo nuker in developer mode
* don't send a jira message and abort early if the URL doesn't end in .patch
* Allow git(8 chars) or git(41 chars) as branch names for patch testing
* make it explicit in the report output if we were in branch or git ref mode
* fix some issues with checkstyle
* when using --resetrepo mode, shellcheck would erroneously flag .orig and .rej 
files from previous patches since those aren't cleared by git due to .gitignore
* fix a few shellcheck warnings I had missed (shellcheck is still mostly 
ignoring dev-support; we should probably fix that)
* findbugs now gives a summary report on the console/jira output rather than 
forcing folks to look at the full findbugs output (although that is still listed 
too!)
* set the TIMER default at startup to the current time rather than 0 to prevent 
incredibly bogus, decades-long runtimes in case of early aborts
* re-order the flags in the options parser
* fix an issue with the test heuristics so that external plugins aren't 
prematurely ignored

Testing itself, it says:

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec | 00m 00s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch | 00m 00s | Pre-patch trunk compilation is 
healthy. |
| {color:red}-1{color} | @author | 00m 00s | The patch appears to contain 13 
@author tags which the Hadoop  community has agreed to not allow in code 
contributions. |
| {color:green}+1{color} | whitespace | 00m 00s | The patch has no   lines that 
end in whitespace. |
| {color:green}+1{color} | release audit | 00m 09s | The applied patch does not 
increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck | 00m 02s | There were no new shellcheck 
issues. |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | shellcheck |
| git revision | f4b3fc5 / trunk |
| Console output | /artifact/patchprocess/console |


This message was automatically generated.


was (Author: aw):
-12:
* add --resetrepo to simulate jenkins git repo nuker in developer mode
* don't send a jira message and abort early if the URL doesn't end in .patch
* Allow git(8 chars) or git(41 chars) as branch names for patch testing
* make it explicit in the report output if we were in branch or git ref mode
* fix some issues with checkstyle
* when using --resetrepo mode, shellcheck would erroneously flag .orig and .rej 
files from previous patches since those aren't cleared by git due to .gitignore
* fix a few shellcheck warnings I had missed
* findbugs now gives a summary report on the console/jira output rather than 
forcing folks to look at the full findbugs output (although that is still listed 
too!)
* set the TIMER to the current time rather than 0 to prevent incredibly bogus, 
decades-long runtimes in case of early aborts
* re-order the flags in the options parser
* fix an issue with the test heuristics so that external plugins aren't 
prematurely ignored

Testing itself, it says:

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec | 00m 00s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch | 00m 00s | Pre-patch trunk compilation is 
healthy. |
| {color:red}-1{color} | @author | 00m 00s | The patch appears to contain 13 
@author tags which the Hadoop  community has agreed to not allow in code 
contributions. |
| {color:green}+1{color} | whitespace | 00m 00s | The patch has no   lines that 
end in whitespace. |
| {color:green}+1{color} | release audit | 00m 09s | The applied patch does not 
increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck | 00m 02s | There were no new shellcheck 
issues. |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | shellcheck |
| git revision | f4b3fc5 / trunk |
| Console output | /artifact/patchprocess/console |


This message was automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.

[jira] [Comment Edited] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486155#comment-14486155
 ] 

Colin Patrick McCabe edited comment on HADOOP-11802 at 4/8/15 10:18 PM:


I thought about this a little bit more, and I wonder whether this finally block 
inside requestShortCircuitShm is causing a "double removal":

{code}
  public void requestShortCircuitShm(String clientName) throws IOException {
    NewShmInfo shmInfo = null;
    boolean success = false;
    DomainSocket sock = peer.getDomainSocket();
    try {
      ...
    } finally {
      ...
      if ((!success) && (peer == null)) {
        // If we failed to pass the shared memory segment to the client,
        // close the UNIX domain socket now.  This will trigger the
        // DomainSocketWatcher callback, cleaning up the segment.
        IOUtils.cleanup(null, sock);
      }
      IOUtils.cleanup(null, shmInfo);
    }
{code}

Closing the socket will remove that shmID, but so will closing the NewShmInfo 
object... let me look into this.

[edit: NewShmInfo#close just closes the shared memory segment, but not the 
domain socket.  Since DomainSocketWatcher is watching the domain socket rather 
than the shm fd, doing both close operations should not be a problem.]
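That bracketed conclusion can be checked with a small stand-alone model (stub classes, not the real DomainSocketWatcher, DomainSocket, or NewShmInfo types): the watcher callback is tied to closing the socket, closing the shm-info touches only the shared-memory fd, and even a literal double close of the socket stays harmless as long as close() is idempotent.

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleRemovalSketch {
    /** Stand-in for the watched domain socket: the first close fires the watcher callback. */
    static class SocketStub implements Closeable {
        final AtomicInteger callbackFires;
        private boolean closed;
        SocketStub(AtomicInteger callbackFires) { this.callbackFires = callbackFires; }
        @Override public void close() {
            if (!closed) {                       // idempotent: a second close() is a no-op
                closed = true;
                callbackFires.incrementAndGet(); // watcher callback cleans up the segment
            }
        }
    }

    /** Stand-in for NewShmInfo: closes only the shared-memory fd, not the socket. */
    static class ShmInfoStub implements Closeable {
        boolean shmFdClosed;
        @Override public void close() { shmFdClosed = true; }
    }

    /** Models the failure path of requestShortCircuitShm, where both cleanups run. */
    static int simulateFailurePath() {
        AtomicInteger fires = new AtomicInteger();
        SocketStub sock = new SocketStub(fires);
        ShmInfoStub shmInfo = new ShmInfoStub();
        sock.close();     // IOUtils.cleanup(null, sock)
        shmInfo.close();  // IOUtils.cleanup(null, shmInfo) -- a different resource
        sock.close();     // even a literal double close is harmless
        return fires.get();
    }

    public static void main(String[] args) {
        System.out.println("watcher callback fired " + simulateFailurePath() + " time(s)");
    }
}
```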


was (Author: cmccabe):
I thought about this a little bit more, and I wonder whether this finally block 
inside requestShortCircuitShm is causing a "double removal":

{code}
  public void requestShortCircuitShm(String clientName) throws IOException {
 
NewShmInfo shmInfo = null;  
 
boolean success = false;
 
DomainSocket sock = peer.getDomainSocket(); 
 
try {   
 
...
} finally { 
 
...
  if ((!success) && (peer == null)) {
// If we failed to pass the shared memory segment to the client,
 
// close the UNIX domain socket now.  This will trigger the 
 
// DomainSocketWatcher callback, cleaning up the segment.   
 
IOUtils.cleanup(null, sock);
 
  }
  IOUtils.cleanup(null, shmInfo);   
 
}   
 
{code}

Closing the socket will remove that shmID, but so will closing the NewShmInfo 
object... let me look into this.

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for not

[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486701#comment-14486701
 ] 

Allen Wittenauer commented on HADOOP-11746:
---


 Console summary for previous Java example:

{noformat}


-1 overall

| Vote |   Subsystem |  Runtime  | Comment
|   0  |  pre-patch  |  08m 44s  | Pre-patch trunk compilation is 
|  | |   | healthy.
|  +1  |@author  |  00m 00s  | The patch does not contain any 
|  | |   | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
|  | |   | any new or modified tests. Please
|  | |   | justify why no new tests are needed
|  | |   | for this patch. Also please list what
|  | |   | manual steps were performed to verify
|  | |   | this patch.
|  -1  | whitespace  |  00m 00s  | The patch has 1 lines that end in 
|  | |   | whitespace.
|  +1  |  javac  |  04m 09s  | There were no new javac warning 
|  | |   | messages.
|  +1  |javadoc  |  05m 41s  | There were no new javadoc warning 
|  | |   | messages.
|  +1  |  release audit  |  00m 11s  | The applied patch does not increase 
|  | |   | the total number of release audit
|  | |   | warnings.
|  +1  | checkstyle  |  03m 22s  | There were no new checkstyle issues. 
|  +1  |install  |  01m 00s  | mvn install still works. 
|  +1  |eclipse:eclipse  |  00m 22s  | The patch built with 
|  | |   | eclipse:eclipse.
|  -1  |   findbugs  |  00m 51s  | The patch appears to introduce 2 new 
|  | |   | Findbugs (version 3.0.1) warnings.
|  -1  | core tests  |  01m 20s  | Tests failed in 
|  | |   | 
hadoop-yarn-project/hadoop-yarn/hadoop
|  | |   | -yarn-common.


 Reason | Tests
  FindBugs  |  FindBugs 
|  Unread public/protected field:At 
Log4jWarningErrorMetricsAppender.java:[line 44] 
  FindBugs  |  FindBugs 
|  Unread public/protected field:At 
Log4jWarningErrorMetricsAppender.java:[line 45] 
 Failed unit tests  |  Failed unit tests 
|  hadoop.yarn.util.TestLog4jWarningErrorMetricsAppender 


|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12723637/MAPREDUCE-6301-002.patch
 |
| Optional Tests | javadoc javac unit checkstyle |
| git revision | f4b3fc5 / trunk |
| whitespace | /tmp/hadoop-test-patch/75950/whitespace.txt |
| Findbugs warnings | 
/tmp/hadoop-test-patch/75950/newPatchFindbugsWarningshadoop-yarn-common.html |
| hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common test log | 
/tmp/hadoop-test-patch/75950/testrun_hadoop-yarn-common.txt |
| Test Results | /testReport/ |
{noformat}

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.patch
>
>
> This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Attachment: HADOOP-11812.002.patch

002 to fix the bug caught by TestViewfsStatus

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes several-fold RPC overhead and added latency.





[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Attachment: HADOOP-11812.002.patch

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes several-fold RPC overhead and added latency.





[jira] [Comment Edited] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486003#comment-14486003
 ] 

Colin Patrick McCabe edited comment on HADOOP-11789 at 4/8/15 8:52 PM:
---

bq. Colin, if -Pnative is set but the OS doesn't have a correct version of 
OpenSSL, then we should make the test fail?

Yes.  Absolutely.

We've had some very subtle bugs earlier where we thought we were testing the 
openssl integration, but we actually were not. This is because there was a very 
subtle difference in the version of openssl installed on the jenkins machine 
and the one we needed.

I think the highest priority here is to make sure our Jenkins coverage doesn't 
regress again.  If people running tests locally want to skip the native tests, 
they can simply run the test suite without {{\-Pnative}}, or skip running the 
openssl test.

I think we should just close this as WONTFIX; what do you think?


was (Author: cmccabe):
bq. Colin, if -Pnative is set but the OS doesn't have a correct version of 
OpenSSL, then we should make the test fail.

Yes.  Absolutely.

We've had some very subtle bugs earlier where we thought we were testing the 
openssl integration, but we actually were not. This is because there was a very 
subtle difference in the version of openssl installed on the jenkins machine 
and the one we needed.

I think the highest priority here is to make sure our Jenkins coverage doesn't 
regress again.  If people running tests locally want to skip the native tests, 
they can simply run the test suite without {{\-Pnative}}, or skip running the 
openssl test.

I think we should just close this as WONTFIX; what do you think?

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Attachment: HADOOP-11812.003.patch

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch, 
> HADOOP-11812.003.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes several-fold RPC overhead and added latency.





[jira] [Commented] (HADOOP-11815) AM JVM hangs after job unregisters and finishes

2015-04-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486201#comment-14486201
 ] 

Junping Du commented on HADOOP-11815:
-

Let's move it to the Hadoop project, given that both the bug and the fix are on 
the Hadoop side. Will kick off the Jenkins test again.

> AM JVM hangs after job unregisters and finishes
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6311.patch, 0001-MAPREDUCE-6311.patch, 
> MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.





[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486711#comment-14486711
 ] 

Andrew Wang commented on HADOOP-11789:
--

I agree with Colin, this is very intentional behavior we introduced in 
HADOOP-11711. I'd also be amenable to a change that skips this if -Pnative is 
not specified, but the NPE on Jenkins needs to be fixed on the Jenkins side.

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486687#comment-14486687
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

For comparison, I ran MAPREDUCE-6301, which has some known issues, through it 
just so everyone can see what a Java failure looks like.  It downloaded the 
current patch (MR-6301-002) and gave this output:

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 08m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author | 00m 00s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included | 00m 00s | The patch doesn't appear to 
include any new or modified tests.  Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. |
| {color:red}-1{color} | whitespace | 00m 00s | The patch has 1  lines that end 
in whitespace. |
| {color:green}+1{color} | javac | 04m 09s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc | 05m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit | 00m 11s | The applied patch does not 
increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 03m 22s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install | 01m 00s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 00m 22s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 00m 51s | The patch appears to introduce 2 
new Findbugs (version 3.0.1) warnings. |
| {color:red}-1{color} | core tests | 01m 20s | Tests failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common. |
\\
\\
|| Reason || Tests ||
| FindBugs | FindBugs |
|   |  Unread public/protected field:At 
Log4jWarningErrorMetricsAppender.java:[line 44] |
| FindBugs | FindBugs |
|   |  Unread public/protected field:At 
Log4jWarningErrorMetricsAppender.java:[line 45] |
| Failed unit tests | Failed unit tests |
|   | hadoop.yarn.util.TestLog4jWarningErrorMetricsAppender |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12723637/MAPREDUCE-6301-002.patch
 |
| Optional Tests | javadoc javac unit checkstyle |
| git revision | f4b3fc5 / trunk |
| whitespace | /artifact/patchprocess/whitespace.txt |
| Findbugs warnings | 
/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html |
| hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common test log | 
/artifact/patchprocess/testrun_hadoop-yarn-common.txt |
| Test Results | /testReport/ |
| Console output | /artifact/patchprocess/console |


This message was automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.patch
>
>
> This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486257#comment-14486257
 ] 

Hadoop QA commented on HADOOP-11812:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12724028/HADOOP-11812.002.patch
  against trunk revision 265ed1f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6077//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6077//console

This message is automatically generated.

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes several-fold RPC overhead and added latency.





[jira] [Commented] (HADOOP-11815) AM JVM hangs after job unregisters and finishes

2015-04-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486255#comment-14486255
 ] 

Haohui Mai commented on HADOOP-11815:
-

The general approach looks good to me. A cleaner way is to store 
{{SignerSecretProvider}} in a final field in {{HttpServer2}} and to call 
{{destroy()}} on the field.
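The pattern Haohui describes can be sketched stand-alone (hypothetical stub types; SecretProvider stands in for SignerSecretProvider, whose real implementation schedules a secret-rolling task that can keep a JVM alive): holding the provider in a final field ties its destroy() to the server's stop path.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class HttpServerShutdownSketch {
    /** Hypothetical stand-in for SignerSecretProvider. */
    interface SecretProvider {
        void destroy();   // in the real class this cancels the secret-rolling task
    }

    /** Minimal model of the HttpServer2 lifecycle being suggested. */
    static class ServerSketch {
        private final SecretProvider secretProvider;  // final field, set at construction
        ServerSketch(SecretProvider provider) { this.secretProvider = provider; }

        void stop() {
            // Stopping the server always tears down the provider, so no
            // leftover non-daemon thread can outlive the AM's unregister.
            secretProvider.destroy();
        }
    }

    /** Returns true iff stopping the server destroyed the provider. */
    static boolean stopDestroysProvider() {
        AtomicBoolean destroyed = new AtomicBoolean(false);
        ServerSketch server = new ServerSketch(() -> destroyed.set(true));
        server.stop();
        return destroyed.get();
    }

    public static void main(String[] args) {
        System.out.println("provider destroyed on stop: " + stopDestroysProvider());
    }
}
```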

> AM JVM hangs after job unregisters and finishes
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6311.patch, 0001-MAPREDUCE-6311.patch, 
> MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486690#comment-14486690
 ] 

Hadoop QA commented on HADOOP-11746:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724125/HADOOP-11746-12.patch
  against trunk revision dc0282d.

{color:red}-1 @author{color}.  The patch appears to contain 13 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6084//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6084//console

This message is automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.patch
>
>
> This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-04-08 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11812:
---
Attachment: (was: HADOOP-11812.002.patch)

> Implement listLocatedStatus for ViewFileSystem to speed up split calculation
> 
>
> Key: HADOOP-11812
> URL: https://issues.apache.org/jira/browse/HADOOP-11812
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>  Labels: performance
> Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch
>
>
> ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
> causes several x of RPC overhead and added latency.





[jira] [Commented] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486493#comment-14486493
 ] 

Hudson commented on HADOOP-11814:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7542 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7542/])
HADOOP-11814. Reformat hadoop-annotations, o.a.h.classification.tools. 
Contributed by Li Lu. (wheat9: rev dc0282d64c6528b02aa9f2df49be01223f087081)
* 
hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
* 
hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java
* 
hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsJDiffDoclet.java
* 
hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Fix For: 2.8.0
>
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 





[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486303#comment-14486303
 ] 

Haohui Mai commented on HADOOP-11776:
-

I believe it creates a lot of noise on the HDFS side. It does not 
differentiate between public and private APIs.

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11776-040115.patch
>
>
> Seems like we haven't touched the API files from jdiff under dev-support for 
> a while. As a result, we're missing the jdiff API files for Hadoop 2. We're 
> also missing YARN when generating the jdiff API files.





[jira] [Updated] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-08 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11789:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Thanks Colin and Andrew for the comments; closing this as WON'T FIX.

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on Jenkins





[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486155#comment-14486155
 ] 

Colin Patrick McCabe commented on HADOOP-11802:
---

I thought about this a little bit more, and I wonder whether this finally block 
inside requestShortCircuitShm is causing a "double removal":

{code}
  public void requestShortCircuitShm(String clientName) throws IOException {
    NewShmInfo shmInfo = null;
    boolean success = false;
    DomainSocket sock = peer.getDomainSocket();
    try {
      ...
    } finally {
      ...
      if ((!success) && (peer == null)) {
        // If we failed to pass the shared memory segment to the client,
        // close the UNIX domain socket now.  This will trigger the
        // DomainSocketWatcher callback, cleaning up the segment.
        IOUtils.cleanup(null, sock);
      }
      IOUtils.cleanup(null, shmInfo);
    }
{code}

Closing the socket will remove that shmID, but so will closing the NewShmInfo 
object... let me look into this.

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for notificationSockets[0] to read a 
> byte
>   for (Entry entry : entries.values()) {
> // We do not remove from entries as we iterate, because that can
> // cause a ConcurrentModificationException.
> sendCallback("close", entries, fdSet, entry.getDomainSocket().fd);
>   }
>   entries.clear();
>   fdSet.close();
> } finally {
>   lock.unlock();
> }
>   }
> {code}
> The exception causes {{watcherThread}} to skip the calls to 
> {{entries.clear()}} and {{fdSet.close()}}.
> {code}
> 2015-04-02 11:48:09,941 [DataXceiver for client 
> unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
> DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
> 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
> e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
> 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
> Thread[Thread-14,5,main] terminating on unexpected exception
> java.lang.IllegalStateException: failed to remove 
> b845649551b6b1eab5c17f630e42489d
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
> at java.lang.Thread.run(Thread.java:722)
> {code}
> Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
> HADOOP-10404. The cluster installation is running code with all of these 
> fixes.
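The failure mode described above — one {{IllegalStateException}} from {{sendCallback}} skipping {{entries.clear()}} and {{fdSet.close()}} — can be sketched in self-contained Java. The names and the simulated registry failure are illustrative only, not the actual Hadoop code; the point is that a per-entry try/catch lets the loop finish and the final cleanup always run.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ResilientCleanupSketch {
  static int closed = 0;

  // Stand-in for DomainSocketWatcher#sendCallback; throwing
  // IllegalStateException simulates the "failed to remove <shmId>"
  // Preconditions.checkState failure seen in the stack trace.
  static void sendCallback(String entry) {
    if ("stale".equals(entry)) {
      throw new IllegalStateException("failed to remove " + entry);
    }
    closed++;
  }

  public static void main(String[] args) {
    List<String> entries = new ArrayList<>(Arrays.asList("a", "stale", "b"));
    for (String e : entries) {
      try {
        sendCallback(e);
      } catch (IllegalStateException ise) {
        // Log and keep going: one bad entry must not skip the remaining
        // callbacks or the final cleanup below.
        System.out.println("cleanup error: " + ise.getMessage());
      }
    }
    entries.clear();  // now guaranteed to run even after a bad entry
    System.out.println("closed=" + closed + " remaining=" + entries.size());
  }
}
```

Without the inner try/catch, the "stale" entry would abort the loop, leaving "b" unclosed and {{entries}} uncleared — the same shape of leak the watcher thread hits.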





[jira] [Comment Edited] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486155#comment-14486155
 ] 

Colin Patrick McCabe edited comment on HADOOP-11802 at 4/8/15 10:18 PM:


I thought about this a little bit more, and I wonder whether this finally block 
inside requestShortCircuitShm is causing a "double removal":

{code}
  public void requestShortCircuitShm(String clientName) throws IOException {
    NewShmInfo shmInfo = null;
    boolean success = false;
    DomainSocket sock = peer.getDomainSocket();
    try {
      ...
    } finally {
      ...
      if ((!success) && (peer == null)) {
        // If we failed to pass the shared memory segment to the client,
        // close the UNIX domain socket now.  This will trigger the
        // DomainSocketWatcher callback, cleaning up the segment.
        IOUtils.cleanup(null, sock);
      }
      IOUtils.cleanup(null, shmInfo);
    }
{code}

Closing the socket will remove that shmID, but so will closing the NewShmInfo 
object... let me look into this.

edit: NewShmInfo#close just closes the shared memory segment, but not the 
domain socket.  Since DomainSocketWatcher is watching the domain socket rather 
than the shm fd, doing both close operations should not be a problem.  So I 
would still recommend adding the catch block and seeing what that tells us.


was (Author: cmccabe):
I thought about this a little bit more, and I wonder whether this finally block 
inside requestShortCircuitShm is causing a "double removal":

{code}
  public void requestShortCircuitShm(String clientName) throws IOException {
    NewShmInfo shmInfo = null;
    boolean success = false;
    DomainSocket sock = peer.getDomainSocket();
    try {
      ...
    } finally {
      ...
      if ((!success) && (peer == null)) {
        // If we failed to pass the shared memory segment to the client,
        // close the UNIX domain socket now.  This will trigger the
        // DomainSocketWatcher callback, cleaning up the segment.
        IOUtils.cleanup(null, sock);
      }
      IOUtils.cleanup(null, shmInfo);
    }
{code}

Closing the socket will remove that shmID, but so will closing the NewShmInfo 
object... let me look into this.

[edit: NewShmInfo#close just closes the shared memory segment, but not the 
domain socket.  Since DomainSocketWatcher is watching the domain socket rather 
than the shm fd, doing both close operations should not be a problem.]

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne

[jira] [Moved] (HADOOP-11815) AM JVM hungs after job unregister and finished

2015-04-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du moved MAPREDUCE-6311 to HADOOP-11815:


Affects Version/s: (was: 2.7.0)
   2.7.0
  Key: HADOOP-11815  (was: MAPREDUCE-6311)
  Project: Hadoop Common  (was: Hadoop Map/Reduce)

> AM JVM hungs after job unregister and finished
> --
>
> Key: HADOOP-11815
> URL: https://issues.apache.org/jira/browse/HADOOP-11815
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Rohith
>Assignee: Rohith
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6311.patch, 0001-MAPREDUCE-6311.patch, 
> MR_TD.out
>
>
> It is observed that the MRAppMaster JVM hangs after unregistering with the 
> ResourceManager.





[jira] [Created] (HADOOP-11816) Remove Apache Xerces dependency

2015-04-08 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11816:
--

 Summary: Remove Apache Xerces dependency
 Key: HADOOP-11816
 URL: https://issues.apache.org/jira/browse/HADOOP-11816
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA


Apache Xerces is no longer used in trunk or branch-2, but the dependency 
still exists. It can be removed.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-04-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486204#comment-14486204
 ] 

Junping Du commented on HADOOP-11754:
-

Hi [~wheat9], it looks like HADOOP-11815 is another issue related to this 
patch. Will you take a look at the quick fix there? Thanks!

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch, 
> HADOOP-11754.003.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceMana

[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-12.patch

-12:
* add --resetrepo to simulate jenkins git repo nuker in developer mode
* don't send a jira message and abort early if the URL doesn't end in .patch
* Allow git(8 chars) or git(41 chars) as branch names for patch testing
* make it explicit in the report output if we were in branch or git ref mode
* fix some issues with checkstyle
* when using --resetrepo mode, shellcheck would erroneously flag .orig and .rej 
files from previous patches, since those aren't cleared by git due to .gitignore
* fix a few shellcheck warnings I had missed
* findbugs now gives a summary report in the console/jira output rather than 
forcing folks to look at the full findbugs output (although that is still 
listed too!)
* set the TIMER to the current time rather than 0 to prevent incredibly bogus, 
decades-long runtimes in case of early aborts
* re-order the flags in the options parser
* fix an issue with the test heuristics so that external plugins aren't 
prematurely ignored

Testing itself, it says:

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec | 00m 00s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch | 00m 00s | Pre-patch trunk compilation is 
healthy. |
| {color:red}-1{color} | @author | 00m 00s | The patch appears to contain 13 
@author tags which the Hadoop community has agreed to not allow in code 
contributions. |
| {color:green}+1{color} | whitespace | 00m 00s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit | 00m 09s | The applied patch does not 
increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck | 00m 02s | There were no new shellcheck 
issues. |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | shellcheck |
| git revision | f4b3fc5 / trunk |
| Console output | /artifact/patchprocess/console |


This message was automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch, 
> HADOOP-11746-12.patch
>
>
> This code is bad and you should feel bad.


