[jira] [Commented] (HADOOP-11220) Jenkins should verify "mvn site" if the patch contains *.apt.vm changes

2014-10-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181105#comment-14181105
 ] 

Steve Loughran commented on HADOOP-11220:
-

add .md checks too

> Jenkins should verify "mvn site" if the patch contains *.apt.vm changes
> ---
>
> Key: HADOOP-11220
> URL: https://issues.apache.org/jira/browse/HADOOP-11220
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhijie Shen
>
> It would be good to make Jenkins verify "mvn site" if the patch contains 
> *.apt.vm changes, to catch obvious build failures such as YARN-2732.
> This is not the first time similar issues have been raised. Having an 
> automated verification step can alert us before we encounter an actual 
> build failure involving the "site" lifecycle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11170:
-
Attachment: HADOOP-11170.4.patch

Updated patch:
* Changed some variable names
* Try to fix the failing test case by guaranteeing the node is deleted (it passes locally)
 

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181179#comment-14181179
 ] 

Hadoop QA commented on HADOOP-11170:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676549/HADOOP-11170.4.patch
  against trunk revision d71d40a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4936//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4936//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4936//console

This message is automatically generated.

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181186#comment-14181186
 ] 

Arun Suresh commented on HADOOP-11170:
--

The test case failure is not related to this patch.
The remaining Findbugs warnings are also unrelated to this patch.

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11122) Fix findbugs in ZK DelegationTokenSecretManagers

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181254#comment-14181254
 ] 

Hudson commented on HADOOP-11122:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #721 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/721/])
HADOOP-11122. Fix findbugs in ZK DelegationTokenSecretManagers. (Arun Suresh 
via kasha) (kasha: rev 70719e5c62f32836914bea88e1ddd99c0ed009e1)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java


> Fix findbugs in ZK DelegationTokenSecretManagers 
> -
>
> Key: HADOOP-11122
> URL: https://issues.apache.org/jira/browse/HADOOP-11122
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HADOOP-11122.1.patch, HADOOP-11122.2.patch, 
> HADOOP-11122.3.patch, HADOOP-11122.4.patch, HADOOP-11122.4.patch, 
> HADOOP-11122.5.patch
>
>
> HADOOP-11017 adds ZK implementation for DelegationTokenSecretManager. This is 
> a follow-up JIRA to address review comments there - findbugs and order of 
> updates to the {{currentKey}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11122) Fix findbugs in ZK DelegationTokenSecretManagers

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181380#comment-14181380
 ] 

Hudson commented on HADOOP-11122:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1910 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1910/])
HADOOP-11122. Fix findbugs in ZK DelegationTokenSecretManagers. (Arun Suresh 
via kasha) (kasha: rev 70719e5c62f32836914bea88e1ddd99c0ed009e1)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix findbugs in ZK DelegationTokenSecretManagers 
> -
>
> Key: HADOOP-11122
> URL: https://issues.apache.org/jira/browse/HADOOP-11122
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HADOOP-11122.1.patch, HADOOP-11122.2.patch, 
> HADOOP-11122.3.patch, HADOOP-11122.4.patch, HADOOP-11122.4.patch, 
> HADOOP-11122.5.patch
>
>
> HADOOP-11017 adds ZK implementation for DelegationTokenSecretManager. This is 
> a follow-up JIRA to address review comments there - findbugs and order of 
> updates to the {{currentKey}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11122) Fix findbugs in ZK DelegationTokenSecretManagers

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181442#comment-14181442
 ] 

Hudson commented on HADOOP-11122:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1935 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1935/])
HADOOP-11122. Fix findbugs in ZK DelegationTokenSecretManagers. (Arun Suresh 
via kasha) (kasha: rev 70719e5c62f32836914bea88e1ddd99c0ed009e1)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


> Fix findbugs in ZK DelegationTokenSecretManagers 
> -
>
> Key: HADOOP-11122
> URL: https://issues.apache.org/jira/browse/HADOOP-11122
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HADOOP-11122.1.patch, HADOOP-11122.2.patch, 
> HADOOP-11122.3.patch, HADOOP-11122.4.patch, HADOOP-11122.4.patch, 
> HADOOP-11122.5.patch
>
>
> HADOOP-11017 adds ZK implementation for DelegationTokenSecretManager. This is 
> a follow-up JIRA to address review comments there - findbugs and order of 
> updates to the {{currentKey}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11177) Reduce tar ball size for MR over distributed cache

2014-10-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181537#comment-14181537
 ] 

Junping Du commented on HADOOP-11177:
-

Two differences for the tar ball in the dist-reduced project (compared with the 
original dist project):
- Reduced size (includes only the libraries under /share)
- Version number removed after unpacking
This makes it more convenient to deploy MR over the distributed cache.

> Reduce tar ball size for MR over distributed cache
> --
>
> Key: HADOOP-11177
> URL: https://issues.apache.org/jira/browse/HADOOP-11177
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-11177.patch
>
>
> The current tar ball built from "mvn package -Pdist -DskipTests -Dtar" is 
> over 160M in size. We need smaller tar ball pieces for features like MR 
> over the distributed cache to support rolling updates of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-23 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-11217:
---
Status: Open  (was: Patch Available)

I was accidentally testing with Java 7 instead of 6.  It turns out that the 
documentation was wrong and Java 6 only supports TLSv1, not TLSv1.1 (a 
different documentation page I just found confirms this).
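Which TLS versions a given JRE actually supports can be checked directly rather than trusting the documentation. A minimal sketch (not part of the patch) using the standard JSSE API:

```java
import javax.net.ssl.SSLContext;

// Minimal sketch (not from the patch): list the SSL/TLS protocol versions
// the running JRE can support, to verify claims like "Java 6 only has TLSv1".
public class SupportedTlsProtocols {
    static String[] supportedProtocols() throws Exception {
        // getSupportedSSLParameters() reports what the JSSE provider can do,
        // independent of what a particular server config enables.
        return SSLContext.getDefault().getSupportedSSLParameters().getProtocols();
    }

    public static void main(String[] args) throws Exception {
        for (String proto : supportedProtocols()) {
            System.out.println(proto);
        }
    }
}
```

On an Oracle Java 6 JRE this prints TLSv1 but not TLSv1.1/TLSv1.2, consistent with the comment above; the exact list varies by JRE vendor and version.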

> Disable SSLv3 (POODLEbleed vulnerability) in KMS
> 
>
> Key: HADOOP-11217
> URL: https://issues.apache.org/jira/browse/HADOOP-11217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HADOOP-11217.patch
>
>
> We should disable SSLv3 in KMS to protect against the POODLEbleed 
> vulnerability.
> See 
> [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
> We have {{sslProtocol="TLS"}} set to only allow TLS in ssl-server.xml, but 
> when I checked, I could still connect with SSLv3.  The documentation is 
> somewhat unclear in the Tomcat configs about {{sslProtocol}}, 
> {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly what each value 
> they take does.  From what I can gather, {{sslProtocol="TLS"}} actually 
> includes SSLv3, and the only way to fix this is to explicitly list which TLS 
> versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-23 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-11217:
---
Attachment: HADOOP-11217.patch

New patch only sets "TLSv1"

> Disable SSLv3 (POODLEbleed vulnerability) in KMS
> 
>
> Key: HADOOP-11217
> URL: https://issues.apache.org/jira/browse/HADOOP-11217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HADOOP-11217.patch, HADOOP-11217.patch
>
>
> We should disable SSLv3 in KMS to protect against the POODLEbleed 
> vulnerability.
> See 
> [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
> We have {{sslProtocol="TLS"}} set to only allow TLS in ssl-server.xml, but 
> when I checked, I could still connect with SSLv3.  The documentation is 
> somewhat unclear in the Tomcat configs about {{sslProtocol}}, 
> {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly what each value 
> they take does.  From what I can gather, {{sslProtocol="TLS"}} actually 
> includes SSLv3, and the only way to fix this is to explicitly list which TLS 
> versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-23 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-11217:
---
Status: Patch Available  (was: Open)

> Disable SSLv3 (POODLEbleed vulnerability) in KMS
> 
>
> Key: HADOOP-11217
> URL: https://issues.apache.org/jira/browse/HADOOP-11217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HADOOP-11217.patch, HADOOP-11217.patch
>
>
> We should disable SSLv3 in KMS to protect against the POODLEbleed 
> vulnerability.
> See 
> [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
> We have {{sslProtocol="TLS"}} set to only allow TLS in ssl-server.xml, but 
> when I checked, I could still connect with SSLv3.  The documentation is 
> somewhat unclear in the Tomcat configs about {{sslProtocol}}, 
> {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly what each value 
> they take does.  From what I can gather, {{sslProtocol="TLS"}} actually 
> includes SSLv3, and the only way to fix this is to explicitly list which TLS 
> versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11218) Add TLSv1.1,TLSv1.2 to KMS

2014-10-23 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-11218:
---
Description: HADOOP-11217 required us to specifically list the versions of 
TLS that KMS supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 
supporting TLSv1.1 and TLSv1.2, we should add them to the list.  (was: 
HADOOP-11217 required us to specifically list the versions of TLS that KMS 
supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
TLSv1.2, we should add that to the list.)
Summary: Add TLSv1.1,TLSv1.2 to KMS  (was: Add TLSv1.2 to KMS)

> Add TLSv1.1,TLSv1.2 to KMS
> --
>
> Key: HADOOP-11218
> URL: https://issues.apache.org/jira/browse/HADOOP-11218
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0
>Reporter: Robert Kanter
>
> HADOOP-11217 required us to specifically list the versions of TLS that KMS 
> supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
> TLSv1.1 and TLSv1.2, we should add them to the list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)
Jinghui Wang created HADOOP-11221:
-

 Summary: JAVA specification for hashcode does not enforce it to be 
non-negative, but IdentityHashStore assumes System.identityHashCode() is 
non-negative
 Key: HADOOP-11221
 URL: https://issues.apache.org/jira/browse/HADOOP-11221
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.4.1
Reporter: Jinghui Wang
Assignee: Jinghui Wang


The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code:title=Bar.java|borderStyle=solid}
private void putInternal(Object k, Object v) {
  int hash = System.identityHashCode(k);
  final int numEntries = buffer.length / 2;
  int index = hash % numEntries;
  ...
}

private int getElementIndex(K k) {
  ...
  final int numEntries = buffer.length / 2;
  int hash = System.identityHashCode(k);
  int index = hash % numEntries;
  int firstIndex = index;
  ...
}
{code}
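The failure mode can be shown in isolation. Java's {{%}} operator keeps the sign of the dividend, so a negative identity hash yields a negative index. A hypothetical illustration (not the attached patch) of the bug and one common fix, masking off the sign bit:

```java
// Hypothetical illustration (not the attached patch): System.identityHashCode()
// may return a negative value on some JVMs (e.g. IBM J9), and Java's %
// operator keeps the sign of the dividend, so the naive index computation
// can produce a negative array index. Masking off the sign bit avoids that.
public class HashIndexDemo {
    static int naiveIndex(int hash, int numEntries) {
        return hash % numEntries;                // negative when hash < 0
    }

    static int maskedIndex(int hash, int numEntries) {
        return (hash & 0x7fffffff) % numEntries; // always in [0, numEntries)
    }

    public static void main(String[] args) {
        int hash = -123456789;                       // simulate a negative identity hash
        System.out.println(naiveIndex(hash, 16));    // -5: would throw on array access
        System.out.println(maskedIndex(hash, 16));   // 11: a valid slot
    }
}
```

Math.abs(hash) is not a safe alternative here, since Math.abs(Integer.MIN_VALUE) is still negative.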



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Description: 
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}

  was:
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code:title=Bar.java|borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}


> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Description: 
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code title=IdentityHashStore|borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}

  was:
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}


> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code title=IdentityHashStore|borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Description: 
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code:borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}

  was:
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code title=IdentityHashStore|borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}


> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Description: 
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code:borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
  private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
  }
{code}

  was:
The following code snippet shows that IdentityHashStore assumes the hashCode is 
always non-negative.

{code:borderStyle=solid}
   private void putInternal(Object k, Object v) {
 int hash = System.identityHashCode(k);
 final int numEntries = buffer.length / 2;
 int index = hash % numEntries;
 ...
   }
   
private int getElementIndex(K k) {
 ...
 final int numEntries = buffer.length / 2;
 int hash = System.identityHashCode(k);
 int index = hash % numEntries;
 int firstIndex = index;
 ...
}
{code}


> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181706#comment-14181706
 ] 

Jinghui Wang commented on HADOOP-11221:
---

This causes failures with IBM Java, where System.identityHashCode can return 
negative values. The attached patch fixes the problem.

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Attachment: HADOOP-11221.patch

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Attachment: (was: HADOOP-11221.patch)

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Status: Patch Available  (was: Open)

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181760#comment-14181760
 ] 

Karthik Kambatla commented on HADOOP-11170:
---

Looks mostly good. One nit: can we minimize direct accesses to ADTSM#currentId? 

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181766#comment-14181766
 ] 

Hadoop QA commented on HADOOP-11217:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676653/HADOOP-11217.patch
  against trunk revision d71d40a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4937//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4937//console

This message is automatically generated.

> Disable SSLv3 (POODLEbleed vulnerability) in KMS
> 
>
> Key: HADOOP-11217
> URL: https://issues.apache.org/jira/browse/HADOOP-11217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HADOOP-11217.patch, HADOOP-11217.patch
>
>
> We should disable SSLv3 in KMS to protect against the POODLEbleed 
> vulnerability.
> See 
> [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
> We have {{sslProtocol="TLS"}} set to only allow TLS in ssl-server.xml, but 
> when I checked, I could still connect with SSLv3.  The documentation is 
> somewhat unclear in the Tomcat configs about the difference between 
> {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what 
> exactly each value does.  From what I can gather, {{sslProtocol="TLS"}} 
> actually includes SSLv3, and the only way to fix this is to explicitly list 
> which TLS versions we support.
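
One way to pin the protocol list, assuming a Tomcat JSSE HTTPS connector of the kind KMS ships (the port, keystore path, and other attributes below are illustrative; only {{sslEnabledProtocols}} is the point):

```xml
<!-- Hedged sketch: explicitly enumerate TLS versions instead of relying on
     sslProtocol="TLS", which can still negotiate SSLv3 on some JVMs. -->
<Connector port="16000" scheme="https" secure="true" SSLEnabled="true"
           sslProtocol="TLS"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
           keystoreFile="${catalina.base}/conf/.keystore" />
```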



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181811#comment-14181811
 ] 

Hadoop QA commented on HADOOP-11221:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676663/HADOOP-11221.patch
  against trunk revision d71d40a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4938//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4938//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4938//console

This message is automatically generated.

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11170:
-
Attachment: HADOOP-11170.5.patch

Thanks for the review [~kasha]

Updating patch with your suggestion and appending a test case to verify that 
renewToken and cancelToken work in the multi-instance case.

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Arun Suresh (JIRA)
Arun Suresh created HADOOP-11222:


 Summary: Fix findBugs error in SpanReceiverHost
 Key: HADOOP-11222
 URL: https://issues.apache.org/jira/browse/HADOOP-11222
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun Suresh
Priority: Minor


Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11222:
-
Attachment: HADOOP-11222.1.patch

Uploading fix

> Fix findBugs error in SpanReceiverHost
> --
>
> Key: HADOOP-11222
> URL: https://issues.apache.org/jira/browse/HADOOP-11222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-11222.1.patch
>
>
> Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11222:
-
Status: Patch Available  (was: Open)

> Fix findBugs error in SpanReceiverHost
> --
>
> Key: HADOOP-11222
> URL: https://issues.apache.org/jira/browse/HADOOP-11222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-11222.1.patch
>
>
> Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181926#comment-14181926
 ] 

Hadoop QA commented on HADOOP-11222:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676709/HADOOP-11222.1.patch
  against trunk revision 7ab7545.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4940//console

This message is automatically generated.

> Fix findBugs error in SpanReceiverHost
> --
>
> Key: HADOOP-11222
> URL: https://issues.apache.org/jira/browse/HADOOP-11222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-11222.1.patch
>
>
> Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181946#comment-14181946
 ] 

Arun Suresh commented on HADOOP-11222:
--

Looks like this has just been fixed: HDFS-7227.
Closing this.

> Fix findBugs error in SpanReceiverHost
> --
>
> Key: HADOOP-11222
> URL: https://issues.apache.org/jira/browse/HADOOP-11222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-11222.1.patch
>
>
> Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11222) Fix findBugs error in SpanReceiverHost

2014-10-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11222:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Fix findBugs error in SpanReceiverHost
> --
>
> Key: HADOOP-11222
> URL: https://issues.apache.org/jira/browse/HADOOP-11222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-11222.1.patch
>
>
> Trivial patch to fix findBugs warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10690) Lack of synchronization on access to InputStream in NativeAzureFileSystem#NativeAzureFsInputStream#close()

2014-10-23 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181960#comment-14181960
 ] 

Chen He commented on HADOOP-10690:
--

Hi [~cnauroth], do we need to mark this ticket as closed since it is checked 
in? 

> Lack of synchronization on access to InputStream in 
> NativeAzureFileSystem#NativeAzureFsInputStream#close()
> --
>
> Key: HADOOP-10690
> URL: https://issues.apache.org/jira/browse/HADOOP-10690
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Chen He
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-10690.patch
>
>
> {code}
> public void close() throws IOException {
>   in.close();
> }
> {code}
> The close() method should be protected by synchronized keyword.
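
A minimal sketch of the proposed fix, assuming the wrapper's other stream operations are already synchronized (the class and method names below are illustrative, not the actual NativeAzureFsInputStream code):

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

// Illustrative wrapper: close() takes the same monitor as read(),
// so a concurrent close cannot interleave with an in-flight read.
class GuardedInputStream implements Closeable {
    private final InputStream in;

    GuardedInputStream(InputStream in) {
        this.in = in;
    }

    public synchronized int read() throws IOException {
        return in.read();
    }

    @Override
    public synchronized void close() throws IOException {
        in.close();
    }
}

public class GuardedInputStreamDemo {
    public static void main(String[] args) throws IOException {
        GuardedInputStream s =
                new GuardedInputStream(new ByteArrayInputStream(new byte[]{42}));
        System.out.println(s.read()); // 42
        s.close();
    }
}
```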



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181962#comment-14181962
 ] 

Chris Douglas commented on HADOOP-11221:


{{Math.abs(Integer.MIN_VALUE)}} is still negative.
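
That failure mode, and a masking fix, can be demonstrated directly (this is a generic sketch of the hashing pattern, not the actual IdentityHashStore patch):

```java
public class NonNegativeIndexDemo {
    // Broken: Math.abs(Integer.MIN_VALUE) overflows back to
    // Integer.MIN_VALUE, so the resulting index can still be negative.
    static int absIndex(int hash, int numEntries) {
        return Math.abs(hash) % numEntries;
    }

    // Safe: clearing the sign bit (or using Math.floorMod) guarantees
    // a result in [0, numEntries).
    static int maskedIndex(int hash, int numEntries) {
        return (hash & 0x7fffffff) % numEntries;
    }

    public static void main(String[] args) {
        System.out.println(Math.abs(Integer.MIN_VALUE));         // -2147483648
        System.out.println(absIndex(Integer.MIN_VALUE, 7));      // -2
        System.out.println(maskedIndex(Integer.MIN_VALUE, 7));   // 0
        System.out.println(Math.floorMod(Integer.MIN_VALUE, 7)); // 5
    }
}
```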

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182003#comment-14182003
 ] 

Hadoop QA commented on HADOOP-11170:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676704/HADOOP-11170.5.patch
  against trunk revision 7ab7545.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4939//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4939//console

This message is automatically generated.

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager throws Exception when trying to renewToken created by a peer

2014-10-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182036#comment-14182036
 ] 

Karthik Kambatla commented on HADOOP-11170:
---

Looks good to me. +1.

> ZKDelegationTokenSecretManager throws Exception when trying to renewToken 
> created by a peer
> ---
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10690) Lack of synchronization on access to InputStream in NativeAzureFileSystem#NativeAzureFsInputStream#close()

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182044#comment-14182044
 ] 

Hadoop QA commented on HADOOP-10690:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12650666/HADOOP-10690.patch
  against trunk revision 828429d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-azure.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4941//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4941//console

This message is automatically generated.

> Lack of synchronization on access to InputStream in 
> NativeAzureFileSystem#NativeAzureFsInputStream#close()
> --
>
> Key: HADOOP-10690
> URL: https://issues.apache.org/jira/browse/HADOOP-10690
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Chen He
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-10690.patch
>
>
> {code}
> public void close() throws IOException {
>   in.close();
> }
> {code}
> The close() method should be protected by synchronized keyword.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11170) ZKDelegationTokenSecretManager fails to renewToken created by a peer

2014-10-23 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11170:
--
Summary: ZKDelegationTokenSecretManager fails to renewToken created by a 
peer  (was: ZKDelegationTokenSecretManager throws Exception when trying to 
renewToken created by a peer)

> ZKDelegationTokenSecretManager fails to renewToken created by a peer
> 
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11167) ZKDelegationTokenSecretManager doesn't always handle zk node existing correctly

2014-10-23 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11167:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

The patch committed for HADOOP-11170 has the fix for this issue as well. 

> ZKDelegationTokenSecretManager doesn't always handle zk node existing 
> correctly
> ---
>
> Key: HADOOP-11167
> URL: https://issues.apache.org/jira/browse/HADOOP-11167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-11167.2.patch, HADOOP-11167.patch, 
> HADOOP-11167.patch
>
>
> The ZKDelegationTokenSecretManager is inconsistent in how it handles curator 
> checkExists calls.  Sometimes it assumes null response means the node exists, 
> sometimes it doesn't.  This causes it to be buggy in some cases.
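
The Curator contract is that {{checkExists().forPath(path)}} returns a {{Stat}} when the znode exists and {{null}} when it does not; treating a null response as "exists" inverts the check. A minimal model of the correct interpretation (the {{Stat}} stand-in and helper below are illustrative, not Curator or the ZKDTSM code):

```java
public class CheckExistsDemo {
    // Stand-in for org.apache.zookeeper.data.Stat.
    static final class Stat {}

    // Correct reading of a checkExists()-style result:
    // a null Stat means the znode does NOT exist.
    static boolean nodeExists(Stat statFromCheckExists) {
        return statFromCheckExists != null;
    }

    public static void main(String[] args) {
        System.out.println(nodeExists(new Stat())); // true
        System.out.println(nodeExists(null));       // false
    }
}
```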



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11170) ZKDelegationTokenSecretManager fails to renewToken created by a peer

2014-10-23 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11170:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Arun and Greg for the contribution. Just committed this to trunk and 
branch-2. 

> ZKDelegationTokenSecretManager fails to renewToken created by a peer
> 
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.7.0
>
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11170) ZKDelegationTokenSecretManager fails to renewToken created by a peer

2014-10-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182231#comment-14182231
 ] 

Hudson commented on HADOOP-11170:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6327 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6327/])
HADOOP-11170. ZKDelegationTokenSecretManager fails to renewToken created by a 
peer. (Arun Suresh and Gregory Chanan via kasha) (kasha: rev 
db45f047ab6b19d8a3e7752bb2cde10827cd8dad)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java


> ZKDelegationTokenSecretManager fails to renewToken created by a peer
> 
>
> Key: HADOOP-11170
> URL: https://issues.apache.org/jira/browse/HADOOP-11170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.7.0
>
> Attachments: HADOOP-11170-11167-11122.combo.patch, 
> HADOOP-11170-11167-11122.combo.patch, HADOOP-11170-11167.combo.patch, 
> HADOOP-11170-11167.combo.patch, HADOOP-11170.1.patch, HADOOP-11170.2.patch, 
> HADOOP-11170.3.patch, HADOOP-11170.4.patch, HADOOP-11170.5.patch
>
>
> When a ZKDTSM tries to renew a token created by a peer, it throws an 
> Exception with the message: 
> "bar is trying to renew a token with wrong password"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182244#comment-14182244
 ] 

Konstantin Shvachko commented on HADOOP-6857:
-

This looks good. +1 from me.
The test indeed fails on trunk and succeeds with the patch.
Let's check with Jenkins.

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
>  This will allow assessing resource usage more accurately.  -- Alex K



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-23 Thread Byron Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Byron Wong updated HADOOP-6857:
---
Target Version/s: 2.7.0
Hadoop Flags:   (was: Incompatible change)
  Status: Patch Available  (was: Reopened)

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
> This will allow assessing resource usage more accurately.  -- Alex K





[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Attachment: HADOOP-11221.patch

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}
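To see why this assumption matters: the JVM specification does not require `System.identityHashCode` to be non-negative, and Java's `%` operator takes the sign of the dividend, so a negative hash produces a negative index into `buffer`. A minimal standalone sketch of the failure mode (plain Java, not the Hadoop source):

```java
public class NegativeModDemo {
    public static void main(String[] args) {
        // Java's % takes the sign of the dividend, so a negative hash
        // yields a negative index, which would crash buffer[index].
        int numEntries = 8;
        int hash = -7;  // stands in for a negative identityHashCode
        int index = hash % numEntries;
        System.out.println(index);  // -7, an invalid array index
    }
}
```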





[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Attachment: (was: HADOOP-11221.patch)

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Updated] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang updated HADOOP-11221:
--
Attachment: HADOOP-11221.v1.patch

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch, HADOOP-11221.v1.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Jinghui Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182279#comment-14182279
 ] 

Jinghui Wang commented on HADOOP-11221:
---

Hi Chris,

Thanks for pointing out the edge case there. I have updated the patch.

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch, HADOOP-11221.v1.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182281#comment-14182281
 ] 

Chris Douglas commented on HADOOP-11221:


How about {{hash = System.identityHashCode(k) & Integer.MAX_VALUE;}} ?
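A quick standalone check (not Hadoop code) of what this mask does: clearing the sign bit guarantees a non-negative result, and it also handles the {{Integer.MIN_VALUE}} edge case that {{Math.abs}} gets wrong.

```java
public class MaskHashDemo {
    public static void main(String[] args) {
        // ANDing with Integer.MAX_VALUE clears the sign bit, so the
        // result is always >= 0 -- including for Integer.MIN_VALUE.
        int h1 = -7 & Integer.MAX_VALUE;                 // 2147483641
        int h2 = Integer.MIN_VALUE & Integer.MAX_VALUE;  // 0
        int h3 = Math.abs(Integer.MIN_VALUE);            // still negative!
        System.out.println(h1 + " " + h2 + " " + h3);
    }
}
```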

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch, HADOOP-11221.v1.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Updated] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-23 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11157:

Attachment: HADOOP-11157.patch

Kicking off the job again.

> ZKDelegationTokenSecretManager never shuts down listenerThreadPool
> --
>
> Key: HADOOP-11157
> URL: https://issues.apache.org/jira/browse/HADOOP-11157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-11157.patch, HADOOP-11157.patch
>
>
> I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
> am running into this issue.  The Solr unit tests look for leaked threads, and 
> when I started using the ZKDelegationTokenSecretManager they started reporting 
> leaks.  Shutting down the listenerThreadPool after the objects that use it 
> resolves the leaked-thread errors.
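The general shutdown pattern the fix needs is the standard `ExecutorService` one: stop accepting tasks, wait briefly for in-flight work, then force-cancel. A minimal generic sketch (plain `java.util.concurrent`, not the actual ZKDelegationTokenSecretManager code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("listener task running"));

        // Orderly shutdown: reject new tasks, let queued work finish...
        pool.shutdown();
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            // ...then force-cancel anything still running.
            pool.shutdownNow();
        }
        System.out.println("terminated=" + pool.isTerminated());
    }
}
```

Without the `shutdown()` call, the pool's non-daemon worker threads linger, which is exactly what a leaked-thread checker reports.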





[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182305#comment-14182305
 ] 

Hadoop QA commented on HADOOP-11221:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676803/HADOOP-11221.v1.patch
  against trunk revision db45f04.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4943//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4943//console

This message is automatically generated.

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch, HADOOP-11221.v1.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Commented] (HADOOP-11221) JAVA specification for hashcode does not enforce it to be non-negative, but IdentityHashStore assumes System.identityHashCode() is non-negative

2014-10-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182327#comment-14182327
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11221:
--

The capacity of an IdentityHashStore and buffer.length (=4*capacity) are powers 
of 2.  Therefore, we may use & to calculate a non-negative mod, i.e.
{code}
final int hash = System.identityHashCode(k);
final int numEntries = buffer.length >> 1;
final int index = hash & (numEntries - 1);
{code}
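When the table size is a power of two, masking with {{numEntries - 1}} keeps only the low bits, which equals the mathematical (floor) mod for any sign of the hash and is always in range. A standalone sketch verifying that (variable names assumed, not the Hadoop source):

```java
public class PowerOfTwoMaskDemo {
    public static void main(String[] args) {
        int numEntries = 8;  // a power of two, as in IdentityHashStore
        for (int hash : new int[] {-7, -1, 0, 5, Integer.MIN_VALUE}) {
            // & with (2^k - 1) keeps the low k bits: always in
            // [0, numEntries) and equal to Math.floorMod for any sign.
            int index = hash & (numEntries - 1);
            assert index == Math.floorMod(hash, numEntries);
            assert index >= 0 && index < numEntries;
        }
        System.out.println("mask matches floorMod for all test hashes");
    }
}
```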

> JAVA specification for hashcode does not enforce it to be non-negative, but 
> IdentityHashStore assumes System.identityHashCode() is non-negative
> ---
>
> Key: HADOOP-11221
> URL: https://issues.apache.org/jira/browse/HADOOP-11221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-11221.patch, HADOOP-11221.v1.patch
>
>
> The following code snippet shows that IdentityHashStore assumes the hashCode 
> is always non-negative.
> {code:borderStyle=solid}
>private void putInternal(Object k, Object v) {
>  int hash = System.identityHashCode(k);
>  final int numEntries = buffer.length / 2;
>  int index = hash % numEntries;
>...
>}
>
>   private int getElementIndex(K k) {
>  ...
>  final int numEntries = buffer.length / 2;
>  int hash = System.identityHashCode(k);
>  int index = hash % numEntries;
>  int firstIndex = index;
>  ...
>   }
> {code}





[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182326#comment-14182326
 ] 

Hadoop QA commented on HADOOP-11157:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676805/HADOOP-11157.patch
  against trunk revision db45f04.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4944//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4944//console

This message is automatically generated.

> ZKDelegationTokenSecretManager never shuts down listenerThreadPool
> --
>
> Key: HADOOP-11157
> URL: https://issues.apache.org/jira/browse/HADOOP-11157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-11157.patch, HADOOP-11157.patch
>
>
> I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
> am running into this issue.  The Solr unit tests look for leaked threads, and 
> when I started using the ZKDelegationTokenSecretManager they started reporting 
> leaks.  Shutting down the listenerThreadPool after the objects that use it 
> resolves the leaked-thread errors.





[jira] [Commented] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182394#comment-14182394
 ] 

Hadoop QA commented on HADOOP-6857:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12676380/HADOOP-6857.patch
  against trunk revision db45f04.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFSForHA
  org.apache.hadoop.cli.TestHDFSCLI

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4942//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4942//console

This message is automatically generated.

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
> This will allow assessing resource usage more accurately.  -- Alex K





[jira] [Created] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2014-10-23 Thread Gopal V (JIRA)
Gopal V created HADOOP-11223:


 Summary: Offer a read-only conf alternative to new Configuration()
 Key: HADOOP-11223
 URL: https://issues.apache.org/jira/browse/HADOOP-11223
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Gopal V


new Configuration() is called from several static blocks across Hadoop.

This is incredibly inefficient, since each one of those involves primarily XML 
parsing at a point where the JIT won't be triggered & interpreter mode is 
essentially forced on the JVM.

An alternative would be to offer a {{Configuration::getDefault()}} method 
which disallows any modifications.

At the very least, such a method would need to be called from 

# org.apache.hadoop.io.nativeio.NativeIO::()
# org.apache.hadoop.security.SecurityUtil::()
# org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::
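One way such a getDefault() could behave, sketched with a hypothetical stand-in class rather than the real org.apache.hadoop.conf.Configuration: do the expensive parse once, cache the result, and hand out a view that rejects writes.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a read-only Configuration.getDefault():
// the expensive parse happens once, and callers share an immutable view.
public class ReadOnlyConf {
    private static final Map<String, String> DEFAULTS = loadDefaultsOnce();

    private static Map<String, String> loadDefaultsOnce() {
        Map<String, String> m = new HashMap<>();
        m.put("fs.defaultFS", "file:///");  // stands in for parsed XML defaults
        return Collections.unmodifiableMap(m);
    }

    public static Map<String, String> getDefault() {
        return DEFAULTS;  // safe to call from static blocks: no re-parsing
    }

    public static void main(String[] args) {
        System.out.println(getDefault().get("fs.defaultFS"));
        try {
            getDefault().put("x", "y");  // any modification attempt fails
        } catch (UnsupportedOperationException expected) {
            System.out.println("read-only, as intended");
        }
    }
}
```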





[jira] [Created] (HADOOP-11224) Improve error messages for all permission related failures

2014-10-23 Thread Harsh J (JIRA)
Harsh J created HADOOP-11224:


 Summary: Improve error messages for all permission related failures
 Key: HADOOP-11224
 URL: https://issues.apache.org/jira/browse/HADOOP-11224
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
Reporter: Harsh J
Priority: Trivial


If a bad file-create request fails, you get a rich error that almost fully 
describes the reason:

{code}Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=root, access=WRITE, 
inode="/":hdfs:supergroup:drwxr-xr-x{code}

However, if a setPermission fails, one only gets a vague:

{code}Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied{code}

It would be nicer if all forms of permission failures logged the accessed inode 
and current ownership and permissions in the same way.
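The fix is essentially a formatting one: build every AccessControlException message from the same template the create path already uses. A hedged sketch of that template (hypothetical helper, not the actual NameNode code):

```java
// Hypothetical helper illustrating the uniform message the report asks for;
// the real fix would live in HDFS's permission checker, not here.
public class PermissionMessageDemo {
    static String denied(String user, String access, String path,
                         String owner, String group, String perms) {
        return String.format(
            "Permission denied: user=%s, access=%s, inode=\"%s\":%s:%s:%s",
            user, access, path, owner, group, perms);
    }

    public static void main(String[] args) {
        // setPermission failures would carry the same detail as create:
        System.out.println(
            denied("root", "WRITE", "/", "hdfs", "supergroup", "drwxr-xr-x"));
    }
}
```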





[jira] [Updated] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2014-10-23 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11223:
-
Component/s: conf

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>  Labels: Performance
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> An alternative would be to offer a {{Configuration::getDefault()}} method 
> which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::





[jira] [Commented] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182453#comment-14182453
 ] 

Konstantin Shvachko commented on HADOOP-6857:
-

Looks like the format change in du reporting broke TestHDFSCLI.
Not sure about TestWebHDFSForHA.

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
> This will allow assessing resource usage more accurately.  -- Alex K


