[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122522#comment-14122522
 ] 

Hadoop QA commented on HADOOP-11049:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1261/HADOOP-11049.patch
  against trunk revision 6104520.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.mapreduce.v2.util.TestMRApps

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4655//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4655//console

This message is automatically generated.

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader includes "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK, along with hadoop and the minimal dependencies that are 
> guaranteed to be on the system classpath. "javax." is too broad for that.
> For example, JSR-330, which is part of JavaEE (not JavaSE), has "javax.inject". 
> Packages like these should not be declared as system classes, as they will 
> result in a ClassNotFoundException if they are needed but are present only on the 
> user classpath.
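
To illustrate why a bare "javax." entry is so broad, here is a minimal sketch of 
prefix-based system-class matching (simplified from what ApplicationClassLoader 
does; the real method also handles exact names and "-" exclusions):

{noformat}
// Simplified sketch: an entry ending in "." is treated as a package prefix.
// A single "javax." entry therefore matches every javax.* class, including
// JavaEE-only packages such as javax.inject that may exist only on the user
// classpath -- which then fail to load with ClassNotFoundException.
static boolean isSystemClass(String name, java.util.List<String> systemClasses) {
  for (String entry : systemClasses) {
    if (entry.endsWith(".") && name.startsWith(entry)) {
      return true;   // package prefix match, e.g. "javax."
    } else if (name.equals(entry)) {
      return true;   // exact class name match
    }
  }
  return false;
}
{noformat}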



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-09-04 Thread Jonathan Allen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122494#comment-14122494
 ] 

Jonathan Allen commented on HADOOP-8989:


I should have time to update things this weekend.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
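
To make the proposal concrete, hypothetical usage along the lines of the options 
above (none of these option names are committed interfaces):

{noformat}
# find directories under /user owned by nobody (hypothetical options)
hadoop fs -find /user -type d -owner nobody

# find *.log files and delete them via NUL-separated xargs
hadoop fs -find /logs -name '*.log' -print0 | xargs -0 -n 100 hadoop fs -rm
{noformat}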



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122479#comment-14122479
 ] 

Hadoop QA commented on HADOOP-11062:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1260/HADOOP-11062.1.patch
  against trunk revision 6104520.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4656//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4656//console

This message is automatically generated.

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch
>
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.
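
A common way to express that in JUnit is an assumption check; a sketch, assuming 
the usual Hadoop native-code probe (not necessarily what the patch does):

{noformat}
import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Assume;
import org.junit.Before;

public class TestOpensslDependentCodec {   // hypothetical test class
  @Before
  public void checkNativeAvailable() {
    // Skip (rather than fail) when the native library is absent,
    // i.e. when the build was not run with -Pnative.
    Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
  }
  // ... OpenSSL-dependent test methods ...
}
{noformat}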



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-09-04 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10321:

Status: Open  (was: Patch Available)

Still running into Javadoc warnings. Canceling the patch until they can be fixed.

> TestCompositeService should cover all enumerations of adding a service to a 
> parent service
> --
>
> Key: HADOOP-10321
> URL: https://issues.apache.org/jira/browse/HADOOP-10321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HADOOP-10321-02.patch, HADOOP-10321-03.patch, 
> HADOOP10321-01.patch
>
>
> HADOOP-10085 fixes some synchronization issues in 
> CompositeService#addService(). The tests should cover all cases. 
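
A sketch of the enumeration in question (hypothetical scaffolding; since 
{{addService()}} is protected, a small wrapper exposes it):

{noformat}
class TestableParent extends CompositeService {
  TestableParent() { super("parent"); }
  void addChild(Service child) { addService(child); }  // expose protected method
}

// One add per parent state; each case should assert the child's resulting state.
TestableParent p = new TestableParent();
p.addChild(childA);               // parent NOTINITED
p.init(new Configuration());
p.addChild(childB);               // parent INITED
p.start();
p.addChild(childC);               // parent STARTED
p.stop();
{noformat}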



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11049) javax package system class default is too broad

2014-09-04 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11049:
-
Status: Patch Available  (was: Open)

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader includes "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK, along with hadoop and the minimal dependencies that are 
> guaranteed to be on the system classpath. "javax." is too broad for that.
> For example, JSR-330, which is part of JavaEE (not JavaSE), has "javax.inject". 
> Packages like these should not be declared as system classes, as they will 
> result in a ClassNotFoundException if they are needed but are present only on the 
> user classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11049) javax package system class default is too broad

2014-09-04 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11049:
-
Attachment: HADOOP-11049.patch

Proposed patch.

Basically, I refined the "javax" entry into its subpackages, taking into account 
what is in JavaSE and what is in JavaEE, and looked at best practices in dealing 
with system packages (such as OSGi). The idea is to spell out the javax packages 
that are included in JavaSE.

I also factored the system classes default out into a properties file. The main 
reason is to help people override this value more easily now that the list has 
become longer. Looking at the properties file is significantly easier than 
checking out the source and getting the value from the java source file.
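
For illustration only (not the actual contents of the patch), such a properties 
file might look like:

{noformat}
# Default system classes for ApplicationClassLoader (illustrative excerpt).
# Trailing dots denote package prefixes; only JavaSE javax subpackages are
# listed, so JavaEE-only packages such as javax.inject remain loadable from
# the user classpath.
system.classes.default=java.,\
  javax.accessibility.,javax.crypto.,javax.imageio.,javax.management.,\
  javax.naming.,javax.net.,javax.print.,javax.security.,javax.sound.,\
  javax.sql.,javax.swing.,javax.tools.,javax.xml.,\
  org.apache.hadoop.,org.w3c.dom.,org.xml.sax.
{noformat}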

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader includes "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK, along with hadoop and the minimal dependencies that are 
> guaranteed to be on the system classpath. "javax." is too broad for that.
> For example, JSR-330, which is part of JavaEE (not JavaSE), has "javax.inject". 
> Packages like these should not be declared as system classes, as they will 
> result in a ClassNotFoundException if they are needed but are present only on the 
> user classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11062:
-
Attachment: HADOOP-11062.1.patch

Re-uploading

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch
>
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-09-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122294#comment-14122294
 ] 

Akira AJISAKA commented on HADOOP-8989:
---

Hi [~jonallen], what's going on with this jira? If you don't have time to work on 
this issue, I'd like to update the patch based on the above comments.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122277#comment-14122277
 ] 

Hadoop QA commented on HADOOP-11062:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1211/HADOOP-11062.1.patch
  against trunk revision f7df24b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

  The test build failed in 
hadoop-common-project/hadoop-common 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4654//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4654//console

This message is automatically generated.

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-11062.1.patch
>
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11042) CryptoInputStream throwing wrong exception class on errors

2014-09-04 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122236#comment-14122236
 ] 

Yi Liu commented on HADOOP-11042:
-

[~ste...@apache.org], could you help take a look? Thanks.

> CryptoInputStream throwing wrong exception class on errors
> --
>
> Key: HADOOP-11042
> URL: https://issues.apache.org/jira/browse/HADOOP-11042
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11042.001.patch
>
>
> Having had a quick look at the {{CryptoInputStream}} class, it's not in sync 
> with the other filesystems' exception logic, as specified in 
> {{src/site/markdown/filesystem/fsdatainputstream.md}}.
> Operations MUST throw an {{IOException}} on out-of-bounds reads, ideally an 
> {{EOFException}}:
> # {{read(byte[] b, int off, int len)}}
> # {{seek(long pos)}}
> # {{seekToNewSource}}
> The tests you want to extend to verify expected behaviour are in 
> {{AbstractContractOpenTest}} and {{AbstractContractSeekTest}}.
> Also, the {{HasEnhancedByteBufferAccess}} implementations may want to think 
> about using {{checkStream()}} before acting on a potentially closed stream.
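
A minimal sketch of the contract being asked for (illustrative, not the actual 
patch):

{noformat}
private void checkStream() throws IOException {
  if (closed) {
    throw new IOException("Stream closed");
  }
}

@Override
public void seek(long pos) throws IOException {
  checkStream();                     // guard against use after close()
  if (pos < 0) {
    throw new EOFException("Cannot seek to a negative offset: " + pos);
  }
  // ... delegate to the underlying stream ...
}
{noformat}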



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122187#comment-14122187
 ] 

Hadoop QA commented on HADOOP-11044:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1206/11044.patch4
  against trunk revision 51a4faf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1275 javac 
compiler warnings (more than the trunk's current 1263 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.ipc.TestFairCallQueue

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4653//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4653//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4653//console

This message is automatically generated.

> FileSystem counters can overflow for large number of readOps, largeReadOps, 
> writeOps
> 
>
> Key: HADOOP-11044
> URL: https://issues.apache.org/jira/browse/HADOOP-11044
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Swapnil Daingade
>Priority: Minor
> Attachments: 11044.patch4
>
>
> The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
> readOps, largeReadOps, writeOps as int. Also, the 
> org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
> getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
> values can overflow if they exceed 2^31-1, showing negative values. It would be 
> nice if these could be changed to long.
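
A quick demonstration of the wraparound (standard Java int behavior):

{noformat}
int readOps = Integer.MAX_VALUE;            // 2^31 - 1 = 2147483647
readOps++;                                  // wraps to -2147483648
long wide = (long) Integer.MAX_VALUE + 1;   // 2147483648; long does not wrap here
{noformat}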



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11062:
-
Status: Patch Available  (was: Open)

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-11062.1.patch
>
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11062:
-
Attachment: HADOOP-11062.1.patch

Uploading initial patch.

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-11062.1.patch
>
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-04 Thread Swapnil Daingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Daingade updated HADOOP-11044:
--
Attachment: (was: 11044.patch3)

> FileSystem counters can overflow for large number of readOps, largeReadOps, 
> writeOps
> 
>
> Key: HADOOP-11044
> URL: https://issues.apache.org/jira/browse/HADOOP-11044
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Swapnil Daingade
>Priority: Minor
> Attachments: 11044.patch4
>
>
> The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
> readOps, largeReadOps, writeOps as int. Also, the 
> org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
> getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
> values can overflow if they exceed 2^31-1, showing negative values. It would be 
> nice if these could be changed to long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-04 Thread Swapnil Daingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Daingade updated HADOOP-11044:
--
Attachment: 11044.patch4

Trying to submit again.

> FileSystem counters can overflow for large number of readOps, largeReadOps, 
> writeOps
> 
>
> Key: HADOOP-11044
> URL: https://issues.apache.org/jira/browse/HADOOP-11044
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Swapnil Daingade
>Priority: Minor
> Attachments: 11044.patch3, 11044.patch4
>
>
> The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
> readOps, largeReadOps, writeOps as int. Also, the 
> org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
> getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
> values can overflow if they exceed 2^31-1, showing negative values. It would be 
> nice if these could be changed to long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11016) KMS should support signing cookies with zookeeper secret manager

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned HADOOP-11016:


Assignee: Arun Suresh  (was: Alejandro Abdelnur)

> KMS should support signing cookies with zookeeper secret manager
> 
>
> Key: HADOOP-11016
> URL: https://issues.apache.org/jira/browse/HADOOP-11016
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
>
> This will allow supporting multiple KMS instances behind a load-balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned HADOOP-11017:


Assignee: Arun Suresh  (was: Alejandro Abdelnur)

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11062:
--
Assignee: Arun Suresh  (was: Charles Lamb)

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb reassigned HADOOP-11062:
-

Assignee: Charles Lamb  (was: Andrew Wang)

> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Charles Lamb
>
> There are a few CryptoCodec-related testcases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11063:
---
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

No tests are required, because this is a change in packaging only.

I committed this to trunk and branch-2.  Alejandro, thank you for the code 
review.

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HADOOP-11063.1.patch
>
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121775#comment-14121775
 ] 

Hadoop QA commented on HADOOP-10758:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666536/HADOOP-10758.8.patch
  against trunk revision 1a09536.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4652//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4652//console

This message is automatically generated.

> KMS: add ACLs on per key basis.
> ---
>
> Key: HADOOP-10758
> URL: https://issues.apache.org/jira/browse/HADOOP-10758
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
> HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
> HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch
>
>
> The KMS server should enforce ACLs on per key basis.
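
For illustration, per-key ACLs along these lines would naturally live in 
{{kms-acls.xml}}; the property names below are illustrative, not necessarily 
those of the final patch:

{noformat}
<property>
  <name>key.acl.key1.MANAGEMENT</name>
  <value>alice hdfs-admins</value>
  <description>Users/groups allowed to manage key1.</description>
</property>
<property>
  <name>key.acl.key1.DECRYPT_EEK</name>
  <value>alice bob</value>
  <description>Users/groups allowed to decrypt EEKs for key1.</description>
</property>
{noformat}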



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-04 Thread Swapnil Daingade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121739#comment-14121739
 ] 

Swapnil Daingade commented on HADOOP-11044:
---

Looked at the test failures. I am not sure if these are directly related to the 
fix. Will investigate more. Is it possible that these were due to some 
intermittent issues?
Should I submit the same patch again? Wanted to check before I did, as I don't 
want to consume resources.

* org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testResponseCode
java.io.IOException: All datanodes 127.0.0.1:48517 are bad. Aborting...
at 
org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:163)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:343)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:776)
at 
org.apache.hadoop.hdfs.AppendTestUtil.testAppend(AppendTestUtil.java:198)
at 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testResponseCode(TestWebHdfsFileSystemContract.java:461)

* 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testPipelineRecoveryStress
java.lang.RuntimeException: Deferred
at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.waitFor(MultithreadedTestUtil.java:121)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testPipelineRecoveryStress(TestPipelinesFailover.java:485)
Caused by: org.apache.hadoop.ipc.RemoteException: File /test-21 could only be 
replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) 
running and 3 node(s) are excluded in this operation.


> FileSystem counters can overflow for large number of readOps, largeReadOps, 
> writeOps
> 
>
> Key: HADOOP-11044
> URL: https://issues.apache.org/jira/browse/HADOOP-11044
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Swapnil Daingade
>Priority: Minor
> Attachments: 11044.patch3
>
>
> The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
> readOps, largeReadOps, writeOps as int. Also, the 
> org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
> getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
> values can overflow if they exceed 2^31-1, showing negative values. It would be 
> nice if these could be changed to long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-10758:
-
Attachment: HADOOP-10758.8.patch

Uploading patch addressing [~tucu00]'s review feedback

> KMS: add ACLs on per key basis.
> ---
>
> Key: HADOOP-10758
> URL: https://issues.apache.org/jira/browse/HADOOP-10758
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
> HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
> HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch
>
>
> The KMS server should enforce ACLs on per key basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121732#comment-14121732
 ] 

Hadoop QA commented on HADOOP-11063:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666525/HADOOP-11063.1.patch
  against trunk revision 1a09536.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4651//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4651//console

This message is automatically generated.

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-11063.1.patch
>
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11063:
---
Status: Patch Available  (was: In Progress)

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-11063.1.patch
>
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121688#comment-14121688
 ] 

Alejandro Abdelnur commented on HADOOP-11063:
-

LGTM, +1

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-11063.1.patch
>
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121682#comment-14121682
 ] 

Hadoop QA commented on HADOOP-11048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666515/HADOOP-11048.patch
  against trunk revision 91d45f0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4650//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4650//console

This message is automatically generated.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.<clinit>(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.<clinit>(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.<init>(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11063:
---
Attachment: HADOOP-11063.1.patch

Here is the patch.  It turns out what we really want is 
{{<archiveClasses>true</archiveClasses>}}.  {{attachClasses}} helps publish the 
jar as an artifact for use in other projects, but we don't need to expose these 
classes beyond KMS.
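
For reference, the switch in question is a one-liner in the war plugin's pom 
configuration (a sketch of the mechanism, not the patch verbatim):

{noformat}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <!-- Package WEB-INF/classes into WEB-INF/lib/<finalName>.jar, keeping
         long class-file paths out of the exploded war. -->
    <archiveClasses>true</archiveClasses>
  </configuration>
</plugin>
{noformat}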

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-11063.1.patch
>
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121674#comment-14121674
 ] 

Hadoop QA commented on HADOOP-11048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666515/HADOOP-11048.patch
  against trunk revision 91d45f0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4649//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4649//console

This message is automatically generated.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.<clinit>(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.<clinit>(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.<init>(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121611#comment-14121611
 ] 

Chris Nauroth commented on HADOOP-11063:


[~tucu00], thanks for the tip.

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121592#comment-14121592
 ] 

Alejandro Abdelnur commented on HADOOP-11063:
-

[~cnauroth], no need to split KMS into 2 modules; adding 
{{<attachClasses>true</attachClasses>}} to the configuration of the war plugin 
in the kms pom would create a KMS JAR and use it within the WAR.

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-04 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11048:
-
Attachment: HADOOP-11048.patch

Updated the comment to match the new temp location.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.<clinit>(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.<clinit>(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.<init>(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121582#comment-14121582
 ] 

Chris Nauroth commented on HADOOP-11063:


I have a patch in progress that splits the build into a hadoop-kms-lib module 
for the main class files and hadoop-kms for the final built war.  With this in 
place, the long class file names get packaged into lib/hadoop-kms-lib.jar 
inside the war, so we avoid the long-path problem without needing to rename a 
lot of classes or do arbitrary refactoring just to satisfy the path length 
limitation.

> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11063 started by Chris Nauroth.
--
> KMS cannot deploy on Windows, because class names are too long.
> ---
>
> Key: HADOOP-11063
> URL: https://issues.apache.org/jira/browse/HADOOP-11063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> Windows has a maximum path length of 260 characters.  KMS includes several 
> long class file names.  During packaging and creation of the distro, these 
> paths get even longer because of prepending the standard war directory 
> structure and our share/hadoop/etc. structure.  The end result is that the 
> final paths are longer than 260 characters, making it impossible to deploy a 
> distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-11063:
--

 Summary: KMS cannot deploy on Windows, because class names are too 
long.
 Key: HADOOP-11063
 URL: https://issues.apache.org/jira/browse/HADOOP-11063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker


Windows has a maximum path length of 260 characters.  KMS includes several long 
class file names.  During packaging and creation of the distro, these paths get 
even longer because of prepending the standard war directory structure and our 
share/hadoop/etc. structure.  The end result is that the final paths are longer 
than 260 characters, making it impossible to deploy a distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121570#comment-14121570
 ] 

Steve Loughran commented on HADOOP-11062:
-

# and {{-Pnative-win}}, of course
# Or downgrade them to skip if openssl isn't on the path, if that is the 
problem.

FWIW SLIDER-394 covers failing fast if something is missing (winutils.exe, 
python, openssl); the probe is straightforward: run the relevant version 
command ({{openssl version}}, {{python --version}}), then inspect the status 
code and grep the output for the expected strings.

[[https://github.com/apache/incubator-slider/blob/develop/slider-core/src/main/java/org/apache/slider/common/tools/SliderUtils.java#L1770]]
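
A rough sketch of that probe style, assuming nothing beyond the JDK (the 
commands and expected substrings are illustrative, not Slider's actual code):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BinaryProbe {
  /** Run a version command, require exit code 0, and grep the output. */
  static boolean probe(String expected, String... command) {
    try {
      // Merge stderr into stdout: e.g. python 2 prints --version to stderr.
      Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
      StringBuilder out = new StringBuilder();
      try (BufferedReader r =
          new BufferedReader(new InputStreamReader(p.getInputStream()))) {
        String line;
        while ((line = r.readLine()) != null) {
          out.append(line).append('\n');
        }
      }
      return p.waitFor() == 0 && out.toString().contains(expected);
    } catch (Exception e) {
      return false;  // binary missing or not executable: fail fast
    }
  }

  public static void main(String[] args) {
    System.out.println("openssl: " + probe("OpenSSL", "openssl", "version"));
    System.out.println("python:  " + probe("Python", "python", "--version"));
  }
}
{code}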




> CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
> --
>
> Key: HADOOP-11062
> URL: https://issues.apache.org/jira/browse/HADOOP-11062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Andrew Wang
>
> There are a few CryptoCodec-related test cases that require Hadoop native 
> code and OpenSSL.
> These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11056) OsSecureRandom.setConf() might leak file descriptors.

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121537#comment-14121537
 ] 

Hudson commented on HADOOP-11056:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1886 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1886/])
HADOOP-11056. OsSecureRandom.setConf() might leak file descriptors.  
Contributed by Yongjun Zhang. (cmccabe: rev 
8f1a668575d35bee11f4cd8173335be5352ec620)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random/OsSecureRandom.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random/TestOsSecureRandom.java


> OsSecureRandom.setConf() might leak file descriptors.
> -
>
> Key: HADOOP-11056
> URL: https://issues.apache.org/jira/browse/HADOOP-11056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.6.0
>
> Attachments: HADOOP-11056.001.patch, HADOOP-11056.002.patch
>
>
> OsSecureRandom.setConf() might leak a resource; the stream is not closed when:
> 1. setConf() is called a second time
> 2. {{fillReservoir(0)}} throws an exception
>  
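
A minimal sketch of the close-before-reopen pattern that avoids both leaks 
(class and field names are illustrative, not the actual OsSecureRandom fix):

{code:java}
import java.io.FileInputStream;
import java.io.IOException;

public class ReopenableSource {
  private FileInputStream stream;

  public synchronized void setConf(String path) throws IOException {
    if (stream != null) {   // leak 1: a second setConf() call
      stream.close();       // release the previous descriptor first
      stream = null;
    }
    FileInputStream in = new FileInputStream(path);
    try {
      in.read();            // stand-in for fillReservoir(0)
    } catch (IOException e) {
      in.close();           // leak 2: close on failure before rethrowing
      throw e;
    }
    stream = in;
  }
}
{code}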



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10863) KMS should have a blacklist for decrypting EEKs

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121536#comment-14121536
 ] 

Hudson commented on HADOOP-10863:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1886 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1886/])
HADOOP-10863. KMS should have a blacklist for decrypting EEKs. (asuresh via 
tucu) (tucu: rev d9a03e272adbf3e9fde501610400f18fb4f6b865)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
* hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java


> KMS should have a blacklist for decrypting EEKs
> ---
>
> Key: HADOOP-10863
> URL: https://issues.apache.org/jira/browse/HADOOP-10863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-10863.1.patch, HADOOP-10863.2.patch, 
> HADOOP-10863.3.patch, HADOOP-10863.4.patch, HADOOP-10863.5.patch
>
>
> In particular, we'll need to put the HDFS admin user there by default to 
> prevent an HDFS admin from getting file encryption keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs and necessary txt files

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121542#comment-14121542
 ] 

Hudson commented on HADOOP-10956:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1886 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1886/])
HADOOP-10956. Fix create-release script to include docs and necessary txt 
files. (kasha) (kasha: rev ce0462129fe09898fd9b169cae0564cb6d9bc419)
* LICENSE.txt
* README.txt
* hadoop-hdfs-project/hadoop-hdfs/LICENSE.txt
* hadoop-yarn-project/LICENSE.txt
* hadoop-common-project/hadoop-common/README.txt
* hadoop-mapreduce-project/NOTICE.txt
* hadoop-hdfs-project/hadoop-hdfs/NOTICE.txt
* hadoop-common-project/hadoop-common/LICENSE.txt
* dev-support/create-release.sh
* hadoop-mapreduce-project/LICENSE.txt
* NOTICE.txt
* hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
* hadoop-yarn-project/NOTICE.txt
* hadoop-common-project/hadoop-common/NOTICE.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-dist/pom.xml


> Fix create-release script to include docs and necessary txt files
> -
>
> Key: HADOOP-10956
> URL: https://issues.apache.org/jira/browse/HADOOP-10956
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.5.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.5.1
>
> Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
> hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch, 
> hadoop-10956-5.patch
>
>
> The create-release script doesn't include docs in the binary tarball. We 
> should fix that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11062:
---

 Summary: CryptoCodec testcases requiring OpenSSL should be run 
only if -Pnative is used
 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Andrew Wang


There are a few CryptoCodec-related test cases that require Hadoop native code 
and OpenSSL.

These tests should be skipped if -Pnative is not used when running the tests.
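
One common way to express that is a sketch like the following, assuming plain 
JUnit 4; the {{native.build.enabled}} system property name here is 
hypothetical, not Hadoop's actual switch:

{code:java}
import org.junit.Assume;
import org.junit.Test;

public class OpensslCryptoCodecTestSketch {
  @Test
  public void testRequiresNativeAndOpenssl() {
    // Skip (rather than fail) when the native code was not built.
    Assume.assumeTrue("native build not enabled",
        Boolean.getBoolean("native.build.enabled"));
    // ... OpenSSL-backed CryptoCodec assertions would follow here
  }
}
{code}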



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-04 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11060:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Yi. Committed to trunk and branch-2.

> Create a CryptoCodec test that verifies interoperability between the JCE and 
> OpenSSL implementations
> 
>
> Key: HADOOP-11060
> URL: https://issues.apache.org/jira/browse/HADOOP-11060
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Yi Liu
> Fix For: 2.6.0
>
> Attachments: HADOOP-11060.001.patch
>
>
> We should have a test that verifies that writing with one codec implementation 
> and reading with the other works, including some random seeks. This should be 
> tested in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-04 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121513#comment-14121513
 ] 

Alejandro Abdelnur commented on HADOOP-11060:
-

+1

> Create a CryptoCodec test that verifies interoperability between the JCE and 
> OpenSSL implementations
> 
>
> Key: HADOOP-11060
> URL: https://issues.apache.org/jira/browse/HADOOP-11060
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Yi Liu
> Attachments: HADOOP-11060.001.patch
>
>
> We should have a test that verifies that writing with one codec implementation 
> and reading with the other works, including some random seeks. This should be 
> tested in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-04 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11054:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

> Add a KeyProvider instantiation based on a URI
> --
>
> Key: HADOOP-11054
> URL: https://issues.apache.org/jira/browse/HADOOP-11054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.6.0
>
> Attachments: HADOOP-11054.patch
>
>
> Currently there is no way to instantiate a {{KeyProvider}} given a URI.
> In the case of HDFS encryption, it would be desirable to be able to 
> explicitly specify a KeyProvider URI to avoid obscure misconfigurations.
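
To illustrate the idea only, a sketch of the scheme-dispatch pattern (the 
interface and scheme handling are hypothetical, not the API this patch adds):

{code:java}
import java.net.URI;

public class ProviderFromUriSketch {
  interface KeyProviderLike { /* createKey, getKeyVersion, ... */ }

  /** Pick an implementation explicitly from the URI scheme. */
  static KeyProviderLike fromUri(URI uri) {
    String scheme = uri.getScheme();
    if ("jceks".equals(scheme)) {
      return new KeyProviderLike() {};  // e.g. a file-backed keystore
    } else if ("kms".equals(scheme)) {
      return new KeyProviderLike() {};  // e.g. a remote KMS client
    }
    throw new IllegalArgumentException("No KeyProvider for URI: " + uri);
  }

  public static void main(String[] args) {
    fromUri(URI.create("kms://http@localhost:16000/kms"));
  }
}
{code}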



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-09-04 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11015:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

> Http server/client utils to propagate and recreate Exceptions from server to 
> client
> ---
>
> Key: HADOOP-11015
> URL: https://issues.apache.org/jira/browse/HADOOP-11015
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.6.0
>
> Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
> HADOOP-11015.patch, HADOOP-11015.patch
>
>
> While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
> improvement was to propagate the server-side exceptions to the client in the 
> same way WebHDFS does.
> This JIRA is to provide a utility class to do the same and to refactor HttpFS 
> and KMS to use it.
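
A rough sketch of the WebHDFS-style technique (illustrative only, not the 
utility class this JIRA adds): the server reports the exception class name and 
message, and the client reflectively rebuilds an equivalent exception.

{code:java}
import java.lang.reflect.Constructor;

public class ExceptionPropagationSketch {
  // Server side: the two fields that would go into the JSON error body.
  static String[] describe(Exception e) {
    return new String[] { e.getClass().getName(), e.getMessage() };
  }

  // Client side: rebuild the exception if the class exists locally,
  // falling back to a generic exception otherwise.
  static Exception recreate(String className, String message) {
    try {
      Class<?> clz = Class.forName(className);
      Constructor<?> ctor = clz.getConstructor(String.class);
      return (Exception) ctor.newInstance(message);
    } catch (Exception reflectionFailure) {
      return new RuntimeException(className + ": " + message);
    }
  }

  public static void main(String[] args) {
    String[] d = describe(new IllegalArgumentException("bad key name"));
    System.out.println(recreate(d[0], d[1]));
  }
}
{code}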



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-04 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121462#comment-14121462
 ] 

Aaron T. Myers commented on HADOOP-11054:
-

+1, the patch looks good to me.

> Add a KeyProvider instantiation based on a URI
> --
>
> Key: HADOOP-11054
> URL: https://issues.apache.org/jira/browse/HADOOP-11054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-11054.patch
>
>
> Currently there is no way to instantiate a {{KeyProvider}} given a URI.
> In the case of HDFS encryption, it would be desirable to be able to 
> explicitly specify a KeyProvider URI to avoid obscure misconfigurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10388) Pure native hadoop client

2014-09-04 Thread Zhanwei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121416#comment-14121416
 ] 

Zhanwei Wang commented on HADOOP-10388:
---

Hi all,

I have open-sourced libhdfs3, a native C/C++ client developed by Pivotal and 
used in HAWQ. See HDFS-6994.

> Pure native hadoop client
> -
>
> Key: HADOOP-10388
> URL: https://issues.apache.org/jira/browse/HADOOP-10388
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: 2014-06-13_HADOOP-10388_design.pdf
>
>
> A pure native Hadoop client has the following use cases/advantages:
> 1. writing YARN applications in C++
> 2. direct access to HDFS without the extra proxy overhead of the web/NFS 
> interfaces
> 3. wrapping the native library to support more languages, e.g. Python
> 4. a lightweight, small footprint compared to the several hundred MB of the 
> JDK and Hadoop libraries with their various dependencies



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11056) OsSecureRandom.setConf() might leak file descriptors.

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121353#comment-14121353
 ] 

Hudson commented on HADOOP-11056:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1861 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1861/])
HADOOP-11056. OsSecureRandom.setConf() might leak file descriptors.  
Contributed by Yongjun Zhang. (cmccabe: rev 
8f1a668575d35bee11f4cd8173335be5352ec620)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random/OsSecureRandom.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random/TestOsSecureRandom.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> OsSecureRandom.setConf() might leak file descriptors.
> -
>
> Key: HADOOP-11056
> URL: https://issues.apache.org/jira/browse/HADOOP-11056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.6.0
>
> Attachments: HADOOP-11056.001.patch, HADOOP-11056.002.patch
>
>
> OsSecureRandom.setConf() might leak a resource; the stream is not closed when:
> 1. setConf() is called a second time
> 2. {{fillReservoir(0)}} throws an exception
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs and necessary txt files

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121358#comment-14121358
 ] 

Hudson commented on HADOOP-10956:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1861 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1861/])
HADOOP-10956. Fix create-release script to include docs and necessary txt 
files. (kasha) (kasha: rev ce0462129fe09898fd9b169cae0564cb6d9bc419)
* hadoop-hdfs-project/hadoop-hdfs/LICENSE.txt
* hadoop-mapreduce-project/LICENSE.txt
* README.txt
* hadoop-yarn-project/NOTICE.txt
* LICENSE.txt
* hadoop-common-project/hadoop-common/README.txt
* dev-support/create-release.sh
* hadoop-hdfs-project/hadoop-hdfs/NOTICE.txt
* hadoop-yarn-project/LICENSE.txt
* NOTICE.txt
* hadoop-common-project/hadoop-common/LICENSE.txt
* hadoop-dist/pom.xml
* hadoop-mapreduce-project/NOTICE.txt
* hadoop-common-project/hadoop-common/NOTICE.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml


> Fix create-release script to include docs and necessary txt files
> -
>
> Key: HADOOP-10956
> URL: https://issues.apache.org/jira/browse/HADOOP-10956
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.5.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.5.1
>
> Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
> hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch, 
> hadoop-10956-5.patch
>
>
> The create-release script doesn't include docs in the binary tarball. We 
> should fix that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10863) KMS should have a blacklist for decrypting EEKs

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121352#comment-14121352
 ] 

Hudson commented on HADOOP-10863:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1861 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1861/])
HADOOP-10863. KMS should have a blacklist for decrypting EEKs. (asuresh via 
tucu) (tucu: rev d9a03e272adbf3e9fde501610400f18fb4f6b865)
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
* hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java


> KMS should have a blacklist for decrypting EEKs
> ---
>
> Key: HADOOP-10863
> URL: https://issues.apache.org/jira/browse/HADOOP-10863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-10863.1.patch, HADOOP-10863.2.patch, 
> HADOOP-10863.3.patch, HADOOP-10863.4.patch, HADOOP-10863.5.patch
>
>
> In particular, we'll need to put the HDFS admin user there by default to 
> prevent an HDFS admin from getting file encryption keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11061) Make Shell probe for winutils.exe more rigorous

2014-09-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11061:
---

 Summary: Make Shell probe for winutils.exe more rigorous
 Key: HADOOP-11061
 URL: https://issues.apache.org/jira/browse/HADOOP-11061
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.5.0
 Environment: windows
Reporter: Steve Loughran
Priority: Minor


The probe for winutils.exe being valid is simple: it only checks that the file 
exists.

It could be stricter and catch some (unlikely but possible) failure modes:

# winutils.exe being a directory
# winutils.exe being a 0-byte file
# winutils.exe not being readable
# winutils.exe not having the magic "MZ" header

These checks could all be combined simply by opening the file and validating 
the header; all of the conditions above would be detected.
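
A minimal sketch of that combined check (illustrative, not the proposed Shell 
code): opening the file and reading the first two bytes covers every failure 
mode listed above.

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class WinutilsProbe {
  static boolean looksLikeExecutable(File f) {
    // Opening fails for directories and unreadable files; a 0-byte or
    // non-PE file fails the "MZ" magic-header check below.
    try (FileInputStream in = new FileInputStream(f)) {
      return in.read() == 'M' && in.read() == 'Z';
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(looksLikeExecutable(new File(args[0])));
  }
}
{code}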



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs and necessary txt files

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121256#comment-14121256
 ] 

Hudson commented on HADOOP-10956:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #670 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/670/])
HADOOP-10956. Fix create-release script to include docs and necessary txt 
files. (kasha) (kasha: rev ce0462129fe09898fd9b169cae0564cb6d9bc419)
* hadoop-common-project/hadoop-common/README.txt
* NOTICE.txt
* README.txt
* LICENSE.txt
* hadoop-common-project/hadoop-common/LICENSE.txt
* hadoop-yarn-project/NOTICE.txt
* dev-support/create-release.sh
* hadoop-yarn-project/LICENSE.txt
* hadoop-mapreduce-project/LICENSE.txt
* hadoop-common-project/hadoop-common/NOTICE.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
* hadoop-mapreduce-project/NOTICE.txt
* hadoop-dist/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/LICENSE.txt
* hadoop-hdfs-project/hadoop-hdfs/NOTICE.txt


> Fix create-release script to include docs and necessary txt files
> -
>
> Key: HADOOP-10956
> URL: https://issues.apache.org/jira/browse/HADOOP-10956
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.5.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.5.1
>
> Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
> hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch, 
> hadoop-10956-5.patch
>
>
> The create-release script doesn't include docs in the binary tarball. We 
> should fix that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11056) OsSecureRandom.setConf() might leak file descriptors.

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121251#comment-14121251
 ] 

Hudson commented on HADOOP-11056:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #670 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/670/])
HADOOP-11056. OsSecureRandom.setConf() might leak file descriptors.  
Contributed by Yongjun Zhang. (cmccabe: rev 
8f1a668575d35bee11f4cd8173335be5352ec620)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random/TestOsSecureRandom.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random/OsSecureRandom.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> OsSecureRandom.setConf() might leak file descriptors.
> -
>
> Key: HADOOP-11056
> URL: https://issues.apache.org/jira/browse/HADOOP-11056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.6.0
>
> Attachments: HADOOP-11056.001.patch, HADOOP-11056.002.patch
>
>
> OsSecureRandom.setConf() might leak a resource; the stream is not closed when:
> 1. setConf() is called a second time
> 2. {{fillReservoir(0)}} throws an exception
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10863) KMS should have a blacklist for decrypting EEKs

2014-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121250#comment-14121250
 ] 

Hudson commented on HADOOP-10863:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #670 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/670/])
HADOOP-10863. KMS should have a blacklist for decrypting EEKs. (asuresh via 
tucu) (tucu: rev d9a03e272adbf3e9fde501610400f18fb4f6b865)
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> KMS should have a blacklist for decrypting EEKs
> ---
>
> Key: HADOOP-10863
> URL: https://issues.apache.org/jira/browse/HADOOP-10863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-10863.1.patch, HADOOP-10863.2.patch, 
> HADOOP-10863.3.patch, HADOOP-10863.4.patch, HADOOP-10863.5.patch
>
>
> In particular, we'll need to put the HDFS admin user there by default to 
> prevent an HDFS admin from getting file encryption keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121246#comment-14121246
 ] 

Hadoop QA commented on HADOOP-9822:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12634912/HADOOP-9822.3.patch
  against trunk revision 8f1a668.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4648//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4648//console

This message is automatically generated.

> create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
> RetryCache constructor
> ---
>
> Key: HADOOP-9822
> URL: https://issues.apache.org/jira/browse/HADOOP-9822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch, 
> HADOOP-9822.3.patch
>
>
> The magic number "16" is also used in ClientId.BYTE_LENGTH, so hard-coding 
> the magic number "16" is a bit confusing.
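
The shape of the proposed change, sketched (class and field names 
illustrative, not the actual RetryCache code):

{code:java}
public class RetryCacheSketch {
  // Named so the value is not mistaken for ClientId.BYTE_LENGTH (also 16).
  private static final int MAX_CAPACITY = 16;

  private final Object[] entries;

  public RetryCacheSketch() {
    this(MAX_CAPACITY);   // was: this(16)
  }

  public RetryCacheSketch(int capacity) {
    entries = new Object[capacity];
  }
}
{code}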



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2014-09-04 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121221#comment-14121221
 ] 

Tsuyoshi OZAWA commented on HADOOP-9822:


[~cmccabe], do you mind taking a look? I think it's ready for review.

> create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
> RetryCache constructor
> ---
>
> Key: HADOOP-9822
> URL: https://issues.apache.org/jira/browse/HADOOP-9822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch, 
> HADOOP-9822.3.patch
>
>
> The magic number "16" is also used in ClientId.BYTE_LENGTH, so hard-coding 
> the magic number "16" is a bit confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121204#comment-14121204
 ] 

Hadoop QA commented on HADOOP-11060:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12666444/HADOOP-11060.001.patch
  against trunk revision 8f1a668.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4647//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4647//console

This message is automatically generated.

> Create a CryptoCodec test that verifies interoperability between the JCE and 
> OpenSSL implementations
> 
>
> Key: HADOOP-11060
> URL: https://issues.apache.org/jira/browse/HADOOP-11060
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Yi Liu
> Attachments: HADOOP-11060.001.patch
>
>
> We should have a test that verifies that writing with one codec implementation 
> and reading with the other works, including some random seeks. This should be 
> tested in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-04 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11060:

Status: Patch Available  (was: Open)

> Create a CryptoCodec test that verifies interoperability between the JCE and 
> OpenSSL implementations
> 
>
> Key: HADOOP-11060
> URL: https://issues.apache.org/jira/browse/HADOOP-11060
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Yi Liu
> Attachments: HADOOP-11060.001.patch
>
>
> We should have a test that verifies that writing with one codec implementation 
> and reading with the other works, including some random seeks. This should be 
> tested in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-04 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11060:

Attachment: HADOOP-11060.001.patch

Updated the tests:
1. Encrypt using JCE and decrypt using OpenSSL, and vice versa.
2. Seek to some position and test decryption.
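
For context, a self-contained sketch of the round-trip idea using plain JCE 
only (the real test wires up Hadoop's JCE- and OpenSSL-backed CryptoCodec 
implementations instead; the all-zero key/IV are test fixtures):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CodecInteropSketch {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16];
    byte[] iv = new byte[16];
    byte[] plain = "interoperability check".getBytes(StandardCharsets.UTF_8);

    // Encrypt with one independently constructed cipher...
    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    byte[] cipherText = enc.doFinal(plain);

    // ...and decrypt with another; with two interoperable codec
    // implementations the round trip must still reproduce the input.
    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    System.out.println("match: "
        + Arrays.equals(plain, dec.doFinal(cipherText)));
  }
}
{code}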

> Create a CryptoCodec test that verifies interoperability between the JCE and 
> OpenSSL implementations
> 
>
> Key: HADOOP-11060
> URL: https://issues.apache.org/jira/browse/HADOOP-11060
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Yi Liu
> Attachments: HADOOP-11060.001.patch
>
>
> We should have a test that verifies that writing with one codec implementation 
> and reading with the other works, including some random seeks. This should be 
> tested in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11050) hconf.c: fix bug where we would sometimes not try to load multiple XML files from the same path

2014-09-04 Thread Wenwu Peng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121057#comment-14121057
 ] 

Wenwu Peng commented on HADOOP-11050:
-

Thanks Colin, +1

> hconf.c: fix bug where we would sometimes not try to load multiple XML files 
> from the same path
> ---
>
> Key: HADOOP-11050
> URL: https://issues.apache.org/jira/browse/HADOOP-11050
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 001-HADOOP-11050.patch
>
>
> hconf.c: fix bug where we would sometimes not try to load multiple XML files 
> from the same path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121054#comment-14121054
 ] 

Hadoop QA commented on HADOOP-11048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666423/HADOOP-11048.patch
  against trunk revision 8f1a668.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4646//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4646//console

This message is automatically generated.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.<clinit>(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.<clinit>(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.<init>(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)