[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-09-05 Thread Jonathan Allen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122494#comment-14122494
 ] 

Jonathan Allen commented on HADOOP-8989:


I should have time to update things this weekend.

 hadoop dfs -find feature
 

 Key: HADOOP-8989
 URL: https://issues.apache.org/jira/browse/HADOOP-8989
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Marco Nicosia
Assignee: Jonathan Allen
 Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch


 Both sysadmins and users make frequent use of the unix 'find' command, but 
 Hadoop has no equivalent. Without it, users are writing scripts which make 
 heavy use of hadoop dfs -lsr and implementing find one-offs. I think hdfs 
 -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
 client side. Possibly an in-NameNode find operation would be only a bit more 
 taxing on the NameNode, but significantly faster from the client's point of 
 view?
 The minimum set of options I can think of which would make a Hadoop find 
 command generally useful is (in priority order):
 * -type (file or directory, for now)
 * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
 * -print0 (for piping to xargs -0)
 * -depth
 * -owner/-group (and -nouser/-nogroup)
 * -name (allowing for shell pattern, or even regex?)
 * -perm
 * -size
 One possible special case, but could possibly be really cool if it ran from 
 within the NameNode:
 * -delete
 The hadoop dfs -lsr | hadoop dfs -rm cycle is really, really slow.
 Lower priority, some people do use operators, mostly to execute -or searches 
 such as:
 * find / \( -nouser -or -nogroup \)
 Finally, I thought I'd include a link to the [POSIX spec for find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
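
The traversal such a find command needs is a straightforward depth-first walk over directory listings. A minimal sketch follows, using java.nio.file locally as a stand-in for Hadoop's FileSystem.listStatus (class and method names here are illustrative, not the eventual HADOOP-8989 API); the predicate emulates -name with a shell glob:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Depth-first traversal a "dfs -find" would perform. java.nio.file is
// used as a local stand-in for FileSystem.listStatus; the PathMatcher
// emulates the "-name" predicate with a shell glob.
public class FindSketch {
    public static List<Path> find(Path root, String glob) throws IOException {
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + glob);
        List<Path> hits = new ArrayList<>();
        Deque<Path> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Path p = stack.pop();
            // test the predicate against the final path component, as find -name does
            if (p.getFileName() != null && matcher.matches(p.getFileName())) {
                hits.add(p);
            }
            if (Files.isDirectory(p)) {
                try (DirectoryStream<Path> children = Files.newDirectoryStream(p)) {
                    for (Path child : children) {
                        stack.push(child);
                    }
                }
            }
        }
        return hits;
    }
}
```

The other predicates (-type, -mtime, -size, -perm) would slot in the same place as the glob check, evaluated against each entry's metadata during the walk.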



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122522#comment-14122522
 ] 

Hadoop QA commented on HADOOP-11049:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1261/HADOOP-11049.patch
  against trunk revision 6104520.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.mapreduce.v2.util.TestMRApps

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4655//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4655//console

This message is automatically generated.

 javax package system class default is too broad
 ---

 Key: HADOOP-11049
 URL: https://issues.apache.org/jira/browse/HADOOP-11049
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-11049.patch


 The system class default defined in ApplicationClassLoader includes {{javax.}}. This 
 is too broad. The intent of the system classes is to exempt classes that are 
 provided by the JDK, along with hadoop and the minimally necessary dependencies 
 that are guaranteed to be on the system classpath. {{javax.}} is too broad for 
 that.
 For example, JSR-330, which is part of JavaEE (not JavaSE), provides {{javax.inject}}. 
 Packages like these should not be declared as system classes, as they will 
 result in a ClassNotFoundException if they are needed and present only on the user 
 classpath.
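
The failure mode follows directly from prefix matching. A simplified stand-in for the system-class check (not the actual ApplicationClassLoader code) shows how a bare "javax." entry wrongly exempts JavaEE packages:

```java
import java.util.Arrays;
import java.util.List;

// Illustrates why a bare "javax." system-class entry is too broad:
// prefix matching exempts every javax.* package, including JavaEE ones
// like javax.inject that may exist only on the user classpath.
// Simplified stand-in for the real ApplicationClassLoader check.
public class SystemClassCheck {
    public static boolean isSystemClass(String className, List<String> prefixes) {
        for (String prefix : prefixes) {
            if (className.startsWith(prefix)) {
                return true;  // exempt: always loaded from the system classpath
            }
        }
        return false;         // loadable from the user classpath
    }
}
```

With the broad list, javax.inject.Inject is treated as a system class even though the JDK does not ship it, so loading it throws ClassNotFoundException; narrowing the entry (e.g. to specific JavaSE subpackages) fixes that.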



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes

2014-09-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11064:
---

 Summary: UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 
due NativeCRC32 method changes
 Key: HADOOP-11064
 URL: https://issues.apache.org/jira/browse/HADOOP-11064
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
 Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
2.4 JARs
Reporter: Steve Loughran
Priority: Blocker


The private native method names and signatures in {{NativeCrc32}} were changed 
in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied link errors 
when they try to perform checksums. 

This essentially stops Hadoop 2.4 applications from running on Hadoop 2.6 unless 
they are rebuilt and repackaged with the hadoop-2.6 JARs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes

2014-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122721#comment-14122721
 ] 

Steve Loughran commented on HADOOP-11064:
-

Stack trace from HBase
{code}
FATAL [master:c6401:45972] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;J)V
at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
at org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:338)
at org.apache.hadoop.hdfs.BlockReaderLocal.fillSlowReadBuffer(BlockReaderLocal.java:388)
at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:408)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:698)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:752)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:495)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:582)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:744)
{code}

 UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 
 method changes
 --

 Key: HADOOP-11064
 URL: https://issues.apache.org/jira/browse/HADOOP-11064
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
 Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
 2.4 JARs
Reporter: Steve Loughran
Priority: Blocker

 The private native method names and signatures in {{NativeCrc32}} were 
 changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied 
 link errors when they try to perform checksums. 
 This essentially stops Hadoop 2.4 applications from running on Hadoop 2.6 unless 
 they are rebuilt and repackaged with the hadoop-2.6 JARs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes

2014-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122731#comment-14122731
 ] 

Steve Loughran commented on HADOOP-11064:
-

The culprit appears to be that the {{nativeVerify}} methods have been renamed 
{{nativeCompute}} and their signatures changed. If they were private and in the 
same class this would not be an issue, but they really live in the external 
{{hadoop.so}} lib, which is now out of sync with any hadoop application using a 
hadoop.jar older than 2.6.


Before

{code}
private static native void nativeVerifyChunkedSums(
  int bytesPerSum, int checksumType,
  ByteBuffer sums, int sumsOffset,
  ByteBuffer data, int dataOffset, int dataLength,
  String fileName, long basePos);
{code}

After

{code}
private static native void nativeComputeChunkedSums(
  int bytesPerSum, int checksumType,
  ByteBuffer sums, int sumsOffset,
  ByteBuffer data, int dataOffset, int dataLength,
  String fileName, long basePos, boolean verify);
{code}

The obvious fix would be to reinstate the existing methods/signatures and relay 
internally to the new methods.

 UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 
 method changes
 --

 Key: HADOOP-11064
 URL: https://issues.apache.org/jira/browse/HADOOP-11064
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
 Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
 2.4 JARs
Reporter: Steve Loughran
Priority: Blocker

 The private native method names and signatures in {{NativeCrc32}} were 
 changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied 
 link errors when they try to perform checksums. 
 This essentially stops Hadoop 2.4 applications from running on Hadoop 2.6 unless 
 they are rebuilt and repackaged with the hadoop-2.6 JARs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122819#comment-14122819
 ] 

Hudson commented on HADOOP-11060:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #671 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/671/])
HADOOP-11060. Create a CryptoCodec test that verifies interoperability between 
the JCE and OpenSSL implementations. (hitliuyi via tucu) (tucu: rev 
b69a48c988c147abf192e36c99e2d4aecc116339)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java


 Create a CryptoCodec test that verifies interoperability between the JCE and 
 OpenSSL implementations
 

 Key: HADOOP-11060
 URL: https://issues.apache.org/jira/browse/HADOOP-11060
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: 2.6.0

 Attachments: HADOOP-11060.001.patch


 We should have a test that verifies that writing with one codec implementation and 
 reading with the other works, including some random seeks. This should be tested 
 in both directions.
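
The property being tested is that AES-CTR output from one implementation decrypts cleanly with the other. A minimal round-trip sketch using only the JDK's JCE provider on both sides (in the real test one side would be the OpenSSL codec):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Minimal illustration of the interop property: bytes produced by one
// AES/CTR implementation must decrypt cleanly with the other. Here both
// roles are played by the JDK's JCE provider; the real test would pair
// the JCE codec with the OpenSSL codec and also exercise random seeks.
public class CtrInterop {
    public static byte[] ctr(byte[] key, byte[] iv, byte[] input, int mode)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(input);
    }
}
```

Running the encrypt side with one codec and the decrypt side with the other, then swapping, gives the "both directions" coverage the description asks for.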



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122815#comment-14122815
 ] 

Hudson commented on HADOOP-11015:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #671 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/671/])
HADOOP-11015. Http server/client utils to propagate and recreate Exceptions 
from server to client. (tucu) (tucu: rev 
70b218748badf079c859c3af2b468a0b7b49c333)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ExceptionProvider.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java
* hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoACLs.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class for that and to refactor HttpFS 
 and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122813#comment-14122813
 ] 

Hudson commented on HADOOP-11063:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #671 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/671/])
HADOOP-11063. KMS cannot deploy on Windows, because class names are too long. 
Contributed by Chris Nauroth. (cnauroth: rev 
b44b2ee4adb78723c221a7da8fd35ed011d0905c)
* hadoop-common-project/hadoop-kms/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 KMS cannot deploy on Windows, because class names are too long.
 ---

 Key: HADOOP-11063
 URL: https://issues.apache.org/jira/browse/HADOOP-11063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-11063.1.patch


 Windows has a maximum path length of 260 characters.  KMS includes several 
 long class file names.  During packaging and creation of the distro, these 
 paths get even longer because of prepending the standard war directory 
 structure and our share/hadoop/etc. structure.  The end result is that the 
 final paths are longer than 260 characters, making it impossible to deploy a 
 distro on Windows.
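
The arithmetic behind the failure is simple: install prefix plus relative class-file path must stay within MAX_PATH. A small packaging sanity check could flag offenders ahead of time (a sketch; not part of the actual HADOOP-11063 patch, which instead shortens the generated paths):

```java
// Sketch of a packaging sanity check for the Windows MAX_PATH limit:
// a distro path only deploys if prefix + separator + relative path
// stays within 260 characters.
public class PathLimitCheck {
    static final int WINDOWS_MAX_PATH = 260;

    public static boolean fitsOnWindows(String distroPrefix, String relativePath) {
        // +1 accounts for the path separator joining prefix and relative path
        return distroPrefix.length() + 1 + relativePath.length() <= WINDOWS_MAX_PATH;
    }
}
```

Long war-expanded prefixes like share\hadoop\kms\tomcat\webapps\kms\WEB-INF\classes leave very little budget for deeply nested package directories and long class names, which is exactly how KMS exceeded the limit.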



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122816#comment-14122816
 ] 

Hudson commented on HADOOP-11054:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #671 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/671/])
HADOOP-11054. Add a KeyProvider instantiation based on a URI. (tucu) (tucu: rev 
41f1662d467ec0b295b742bb80c87482504fbf25)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java


 Add a KeyProvider instantiation based on a URI
 --

 Key: HADOOP-11054
 URL: https://issues.apache.org/jira/browse/HADOOP-11054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11054.patch


 Currently there is no way to instantiate a {{KeyProvider}} given a URI.
 In the case of HDFS encryption, it would be desirable to be able to explicitly 
 specify a KeyProvider URI to avoid obscure misconfigurations.
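
The idea is that the URI scheme alone selects the provider implementation. A self-contained sketch of that dispatch (class, map, and provider names here are illustrative, not the actual KeyProviderFactory API, which uses a ServiceLoader-style lookup):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of URI-based provider instantiation: the URI scheme selects a
// factory directly, instead of relying only on a configured provider
// list. Names are illustrative stand-ins for the real KeyProvider types.
public class UriProviderLookup {
    static final Map<String, Function<URI, String>> FACTORIES = new HashMap<>();
    static {
        // each scheme maps to a factory; the String result stands in for
        // a constructed KeyProvider instance
        FACTORIES.put("jceks", uri -> "JavaKeyStoreProvider@" + uri.getPath());
        FACTORIES.put("kms", uri -> "KMSClientProvider@" + uri.getAuthority());
    }

    public static String get(URI uri) {
        Function<URI, String> factory = FACTORIES.get(uri.getScheme());
        if (factory == null) {
            throw new IllegalArgumentException(
                    "No KeyProvider for scheme: " + uri.getScheme());
        }
        return factory.apply(uri);
    }
}
```

Passing an explicit URI such as kms://host:port makes the intended provider unambiguous, which is the misconfiguration-avoidance point of the description.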



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-05 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122883#comment-14122883
 ] 

Charles Lamb commented on HADOOP-11062:
---

Hi [~asuresh],

I was under the impression from [~tucu00] that if -Pnative is not specified, 
then some sort of helpful warning should be written to the log. I applied your 
patch and ran TestCryptoCodec without the native profile using

{quote}
mvn test -Dtest=TestCryptoCodec
{quote}

The surefire report only shows that the tests in TCC were skipped and there's 
no corresponding output.

{quote}
---
Test set: org.apache.hadoop.crypto.TestCryptoCodec
---
Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 0.268 sec - in 
org.apache.hadoop.crypto.TestCryptoCodec
{quote}

Actually, I may be doing something wrong, but running with the native profile

{quote}
mvn -Pnative test -Dtest=TestCryptoCodec
{quote}

produces the same surefire output (i.e. it skips both tests).

If the only requirement is that we skip the tests cleanly, then this patch is 
fine.

Other tests like TestCryptoStreamsWithOpensslAesCtrCryptoCodec, 
TestOpensslSecureRandom, and TestCryptoStreamsForLocalFS generate this warning

{quote}
2014-09-05 08:11:56,484 WARN  util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
{quote}

I don't know if that's a sufficient warning or not.
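
The behavior being asked for is a skip with an explicit, visible reason rather than a silent skip. A tiny plain-Java stand-in for that guard (in a JUnit test this would be Assume.assumeTrue with a message; names here are illustrative):

```java
// Sketch of the guard discussed above: when native/OpenSSL support is
// absent, skip the test but surface a visible reason instead of
// skipping silently. Plain-Java stand-in for a JUnit Assume check.
public class NativeGuard {
    public static String checkNative(boolean nativeAvailable) {
        if (!nativeAvailable) {
            // the message is what makes the surefire report self-explanatory
            return "SKIPPED: OpenSSL codec tests require -Pnative (native-hadoop not loaded)";
        }
        return "RUN";
    }
}
```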


 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch


 There are a few CryptoCodec-related testcases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122941#comment-14122941
 ] 

Hudson commented on HADOOP-11063:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1862 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1862/])
HADOOP-11063. KMS cannot deploy on Windows, because class names are too long. 
Contributed by Chris Nauroth. (cnauroth: rev 
b44b2ee4adb78723c221a7da8fd35ed011d0905c)
* hadoop-common-project/hadoop-kms/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 KMS cannot deploy on Windows, because class names are too long.
 ---

 Key: HADOOP-11063
 URL: https://issues.apache.org/jira/browse/HADOOP-11063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-11063.1.patch


 Windows has a maximum path length of 260 characters.  KMS includes several 
 long class file names.  During packaging and creation of the distro, these 
 paths get even longer because of prepending the standard war directory 
 structure and our share/hadoop/etc. structure.  The end result is that the 
 final paths are longer than 260 characters, making it impossible to deploy a 
 distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122943#comment-14122943
 ] 

Hudson commented on HADOOP-11015:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1862 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1862/])
HADOOP-11015. Http server/client utils to propagate and recreate Exceptions 
from server to client. (tucu) (tucu: rev 
70b218748badf079c859c3af2b468a0b7b49c333)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoACLs.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ExceptionProvider.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java


 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class for that and to refactor HttpFS 
 and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122948#comment-14122948
 ] 

Hudson commented on HADOOP-11060:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1862 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1862/])
HADOOP-11060. Create a CryptoCodec test that verifies interoperability between 
the JCE and OpenSSL implementations. (hitliuyi via tucu) (tucu: rev 
b69a48c988c147abf192e36c99e2d4aecc116339)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java


 Create a CryptoCodec test that verifies interoperability between the JCE and 
 OpenSSL implementations
 

 Key: HADOOP-11060
 URL: https://issues.apache.org/jira/browse/HADOOP-11060
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: 2.6.0

 Attachments: HADOOP-11060.001.patch


 We should have a test that verifies that writing with one codec implementation and 
 reading with the other works, including some random seeks. This should be tested 
 in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122945#comment-14122945
 ] 

Hudson commented on HADOOP-11054:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1862 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1862/])
HADOOP-11054. Add a KeyProvider instantiation based on a URI. (tucu) (tucu: rev 
41f1662d467ec0b295b742bb80c87482504fbf25)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java


 Add a KeyProvider instantiation based on a URI
 --

 Key: HADOOP-11054
 URL: https://issues.apache.org/jira/browse/HADOOP-11054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11054.patch


 Currently there is no way to instantiate a {{KeyProvider}} given a URI.
 In the case of HDFS encryption, it would be desirable to be able to explicitly 
 specify a KeyProvider URI to avoid obscure misconfigurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122956#comment-14122956
 ] 

Hudson commented on HADOOP-11063:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1887 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1887/])
HADOOP-11063. KMS cannot deploy on Windows, because class names are too long. 
Contributed by Chris Nauroth. (cnauroth: rev 
b44b2ee4adb78723c221a7da8fd35ed011d0905c)
* hadoop-common-project/hadoop-kms/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 KMS cannot deploy on Windows, because class names are too long.
 ---

 Key: HADOOP-11063
 URL: https://issues.apache.org/jira/browse/HADOOP-11063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-11063.1.patch


 Windows has a maximum path length of 260 characters.  KMS includes several 
 long class file names.  During packaging and creation of the distro, these 
 paths get even longer because of prepending the standard war directory 
 structure and our share/hadoop/etc. structure.  The end result is that the 
 final paths are longer than 260 characters, making it impossible to deploy a 
 distro on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11060) Create a CryptoCodec test that verifies interoperability between the JCE and OpenSSL implementations

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122962#comment-14122962
 ] 

Hudson commented on HADOOP-11060:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1887 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1887/])
HADOOP-11060. Create a CryptoCodec test that verifies interoperability between 
the JCE and OpenSSL implementations. (hitliuyi via tucu) (tucu: rev 
b69a48c988c147abf192e36c99e2d4aecc116339)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java


 Create a CryptoCodec test that verifies interoperability between the JCE and 
 OpenSSL implementations
 

 Key: HADOOP-11060
 URL: https://issues.apache.org/jira/browse/HADOOP-11060
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: 2.6.0

 Attachments: HADOOP-11060.001.patch


 We should have a test that verifies that writing with one codec implementation and 
 reading with the other works, including some random seeks. This should be tested 
 in both directions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122959#comment-14122959
 ] 

Hudson commented on HADOOP-11054:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1887 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1887/])
HADOOP-11054. Add a KeyProvider instantiation based on a URI. (tucu) (tucu: rev 
41f1662d467ec0b295b742bb80c87482504fbf25)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Add a KeyProvider instantiation based on a URI
 --

 Key: HADOOP-11054
 URL: https://issues.apache.org/jira/browse/HADOOP-11054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11054.patch


 Currently there is no way to instantiate a {{KeyProvider}} given a URI.
 In the case of HDFS encryption, it would be desirable to be able to explicitly 
 specify a KeyProvider URI to avoid obscure misconfigurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14122958#comment-14122958
 ] 

Hudson commented on HADOOP-11015:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1887 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1887/])
HADOOP-11015. Http server/client utils to propagate and recreate Exceptions 
from server to client. (tucu) (tucu: rev 
70b218748badf079c859c3af2b468a0b7b49c333)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ExceptionProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoACLs.java
* hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java


 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While doing HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class that does the same and to refactor 
 HttpFS and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123025#comment-14123025
 ] 

Arun Suresh commented on HADOOP-11062:
--

[~clamb], so the test is being skipped for two different reasons.
In the first case, when {{-Pnative}} is not specified, it skips because it 
doesn't find the {{runningWithNative}} system property.
In the second case, when {{-Pnative}} IS specified, it skips because it can't 
load the OpenSSL library.

I'll modify it to give a reason for skipping, and will also add the check to 
the other tests (I didn't earlier since they were running fine on my Mac).
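The two-branch skip check described above can be sketched as follows. This is a sketch only: the property name {{runningWithNative}} comes from the comment, but the class, method names, and the OpenSSL probe below are illustrative assumptions, not the actual patch.

```java
// Sketch only: names mirror the comment above but are assumptions, not the patch.
public class CryptoCodecSkipSketch {

    // Returns null when the test should run, otherwise an explicit skip reason.
    static String skipReason() {
        if (System.getProperty("runningWithNative") == null) {
            // Case 1: -Pnative was not specified, so the build never set the flag.
            return "skipping: -Pnative not specified (runningWithNative unset)";
        }
        if (!opensslLoaded()) {
            // Case 2: -Pnative WAS specified, but the OpenSSL library failed to load.
            return "skipping: OpenSSL library could not be loaded";
        }
        return null;
    }

    // Hypothetical stand-in for the real native-library probe.
    static boolean opensslLoaded() {
        return false;
    }

    public static void main(String[] args) {
        String reason = skipReason();
        System.out.println(reason == null ? "running test" : reason);
    }
}
```

Reporting the reason string (rather than silently skipping) is the point of the change: the two skip cases become distinguishable in the test output.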

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch


 There are a few CryptoCodec-related test cases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11007) Reinstate building of ant tasks support

2014-09-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-11007:

Attachment: HADOOP-11007v3.patch

Adding dependency to hadoop-project.

 Reinstate building of ant tasks support
 ---

 Key: HADOOP-11007
 URL: https://issues.apache.org/jira/browse/HADOOP-11007
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, fs
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: HADOOP-11007.patch, HADOOP-11007v2.patch, 
 HADOOP-11007v3.patch


 The ant tasks support from HADOOP-1508 is still present under 
 hadoop-hdfs/src/ant/ but is no longer being built.  It would be nice if this 
 was reinstated in the build and distributed as part of the release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11055) non-daemon pid files are missing

2014-09-05 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123065#comment-14123065
 ] 

John Smith commented on HADOOP-11055:
-

Isn't it safe to write the pid file inside the hadoop_start_daemon function, 
since that is the process whose id the pid file records?

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Priority: Blocker

 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-09-05 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11032:

Attachment: HADOOP-11032.3.patch

 Replace use of Guava Stopwatch with Apache StopWatch
 

 Key: HADOOP-11032
 URL: https://issues.apache.org/jira/browse/HADOOP-11032
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gary Steelman
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-11032.1.patch, HADOOP-11032.2.patch, 
 HADOOP-11032.3.patch, HADOOP-11032.3.patch


 This patch reduces Hadoop's dependency on an old version of guava. 
 Stopwatch.elapsedMillis() isn't part of guava past v16 and the tools I'm 
 working on use v17. 
 To remedy this and also reduce Hadoop's reliance on old versions of guava, we 
 can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch) which 
 provides nearly equivalent functionality. apache.commons.lang is already a 
 dependency for Hadoop so this will not introduce new dependencies. 
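The swap the description proposes can be sketched as below. The commons-lang calls shown (start(), stop(), getTime()) are the standard org.apache.commons.lang.time.StopWatch API; the Guava lines in the comments are the calls being replaced. Whether every Hadoop call site maps this directly is an assumption.

```java
import org.apache.commons.lang.time.StopWatch;

public class StopWatchSwap {
    public static void main(String[] args) throws InterruptedException {
        // Before (Guava, removed after v16):
        //   Stopwatch sw = new Stopwatch().start();
        //   ... sw.elapsedMillis();
        StopWatch sw = new StopWatch();
        sw.start();
        Thread.sleep(10);          // stand-in for the timed work
        sw.stop();
        System.out.println("elapsed ms: " + sw.getTime());
    }
}
```

Note one behavioral difference worth checking at each call site: commons-lang StopWatch throws IllegalStateException on a second stop() without an intervening start(), which is the stop()-twice concern raised in the comments below.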



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11049) javax package system class default is too broad

2014-09-05 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11049:
-
Attachment: HADOOP-11049.patch

Fixed the broken unit test.

 javax package system class default is too broad
 ---

 Key: HADOOP-11049
 URL: https://issues.apache.org/jira/browse/HADOOP-11049
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-11049.patch, HADOOP-11049.patch


 The system class default defined in ApplicationClassLoader includes the 
 prefix javax., which is too broad. The intent of the system classes is to 
 exempt classes that are provided by the JDK along with hadoop and the 
 minimally necessary dependencies that are guaranteed to be on the system 
 classpath; javax. covers more than that.
 For example, JSR-330, which is part of Java EE (not Java SE), has 
 javax.inject. Packages like these should not be declared as system classes, 
 as that will result in a ClassNotFoundException if they are needed and 
 present on the user classpath.
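A hedged sketch of why the javax. entry is harmful, assuming the usual prefix semantics of a system-class list (a trailing dot matches a package prefix, otherwise an exact name match); the lists and class below are illustrative, not the actual ApplicationClassLoader defaults.

```java
import java.util.Arrays;
import java.util.List;

public class SystemClassMatch {
    // A trailing '.' marks a package prefix; otherwise match the exact name.
    static boolean isSystemClass(String name, List<String> systemClasses) {
        for (String entry : systemClasses) {
            if (entry.endsWith(".") ? name.startsWith(entry) : name.equals(entry)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // "javax." catches javax.inject.Inject (JSR-330, not in the JDK), so
        // loading is delegated to the system classloader and fails with
        // ClassNotFoundException when the jar is only on the user classpath.
        System.out.println(isSystemClass("javax.inject.Inject",
                Arrays.asList("java.", "javax.")));        // too broad: true
        System.out.println(isSystemClass("javax.inject.Inject",
                Arrays.asList("java.", "javax.xml.")));    // narrower: false
    }
}
```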



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-09-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123142#comment-14123142
 ] 

Tsuyoshi OZAWA commented on HADOOP-11032:
-

[~gsteelman], thanks for your suggestions. 

{quote}
Any idea if the build scripts changed and that's the reason we can't find 
SpanReceiverHost? 
{quote}

I think HDFS-7001 is addressing the problem.

{quote}
Second, I think a silent pass or a Log.warn() statement would suffice for 
StopWatch.stop() twice in a row. 
{quote}

Do you mean we should add a wrapper class around StopWatch to handle them? 
That could be overkill; I think it's enough to use the StopWatch class 
correctly.

{quote}
And finally, if we remove the patch 3 attachment and re-upload it as patch 3 
again, will Jenkins build with patch 3 again?
{quote}

Yes, Jenkins will :-)


 Replace use of Guava Stopwatch with Apache StopWatch
 

 Key: HADOOP-11032
 URL: https://issues.apache.org/jira/browse/HADOOP-11032
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gary Steelman
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-11032.1.patch, HADOOP-11032.2.patch, 
 HADOOP-11032.3.patch


 This patch reduces Hadoop's dependency on an old version of guava. 
 Stopwatch.elapsedMillis() isn't part of guava past v16 and the tools I'm 
 working on use v17. 
 To remedy this and also reduce Hadoop's reliance on old versions of guava, we 
 can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch) which 
 provides nearly equivalent functionality. apache.commons.lang is already a 
 dependency for Hadoop so this will not introduce new dependencies. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11065) Rat check should exclude **/build/**

2014-09-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-11065:
-

 Summary: Rat check should exclude **/build/**
 Key: HADOOP-11065
 URL: https://issues.apache.org/jira/browse/HADOOP-11065
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker


https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ copies the 
nightly scripts under build/. For the rat-check to pass here, we should exclude 
the build directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11065) Rat check should exclude **/build/**

2014-09-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11065:
--
Attachment: hadoop-11065.patch

Straightforward change. Tested it locally by creating a directory named build 
and a dummy file without a license under it. 

 Rat check should exclude **/build/**
 

 Key: HADOOP-11065
 URL: https://issues.apache.org/jira/browse/HADOOP-11065
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-11065.patch


 https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ copies the 
 nightly scripts under build/. For the rat-check to pass here, we should 
 exclude the build directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11066) Make RAT run at root level instead of each module to generate a single rat.txt

2014-09-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-11066:
-

 Summary: Make RAT run at root level instead of each module to 
generate a single rat.txt
 Key: HADOOP-11066
 URL: https://issues.apache.org/jira/browse/HADOOP-11066
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Karthik Kambatla






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11066) Make RAT run at root level instead of each module to generate a single rat.txt

2014-09-05 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned HADOOP-11066:
--

Assignee: Robert Kanter

 Make RAT run at root level instead of each module to generate a single rat.txt
 --

 Key: HADOOP-11066
 URL: https://issues.apache.org/jira/browse/HADOOP-11066
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Robert Kanter





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11052) hadoop_verify_secure_prereq's results aren't checked in bin/hdfs

2014-09-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11052:
--
Attachment: HADOOP-11052.patch

This is a trivial fix to the problem. Also fixed the bad logic for 
HADOOP_SECURE_COMMAND.

 hadoop_verify_secure_prereq's results aren't checked in bin/hdfs
 

 Key: HADOOP-11052
 URL: https://issues.apache.org/jira/browse/HADOOP-11052
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Priority: Critical
 Attachments: HADOOP-11052.patch


 Just need an else + exit in the secure_service stanza.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11055) non-daemon pid files are missing

2014-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123276#comment-14123276
 ] 

Allen Wittenauer commented on HADOOP-11055:
---

Yup, you are absolutely correct.  There's no point in keeping the pid write 
after the fork since it should write the exact same info anyway.
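The change agreed above can be sketched like this. The function body and variable names are assumptions about the rewritten shell scripts, not the committed code; the point is that the pid is written inside hadoop_start_daemon itself, so both daemonized and default (non-daemon) mode produce a pid file with no second write after the fork.

```shell
# Sketch: write the pid file inside the function that owns the process.
hadoop_start_daemon() {
  local pidfile=$1
  shift
  # $$ is the pid of the shell that runs (in the real script: execs) the
  # command, so the same value is recorded in both modes.
  echo $$ > "${pidfile}"
  "$@"    # the real script uses: exec "$@"
}

hadoop_start_daemon /tmp/hdfs-demo.pid true
cat /tmp/hdfs-demo.pid
```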

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Priority: Blocker

 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11007) Reinstate building of ant tasks support

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123287#comment-14123287
 ] 

Hadoop QA commented on HADOOP-11007:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666803/HADOOP-11007v3.patch
  against trunk revision 45efc96.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-ant 
hadoop-tools/hadoop-tools-dist:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4657//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4657//console

This message is automatically generated.

 Reinstate building of ant tasks support
 ---

 Key: HADOOP-11007
 URL: https://issues.apache.org/jira/browse/HADOOP-11007
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, fs
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: HADOOP-11007.patch, HADOOP-11007v2.patch, 
 HADOOP-11007v3.patch


 The ant tasks support from HADOOP-1508 is still present under 
 hadoop-hdfs/src/ant/ but is no longer being built.  It would be nice if this 
 was reinstated in the build and distributed as part of the release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11067) warning message 'ssl.client.truststore.location has not been set' gets printed for hftp command

2014-09-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HDFS-6998 to HADOOP-11067:
--

Key: HADOOP-11067  (was: HDFS-6998)
Project: Hadoop Common  (was: Hadoop HDFS)

 warning message 'ssl.client.truststore.location has not been set' gets 
 printed for hftp command
 ---

 Key: HADOOP-11067
 URL: https://issues.apache.org/jira/browse/HADOOP-11067
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Windows
Reporter: Yesha Vora
Assignee: Xiaoyu Yao
  Labels: newbie
 Attachments: HDFS-6998.0.patch


 The hftp command prints 'WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded' in an unsecured Windows environment. 
 This issue only exists in the Windows environment.
 {noformat}
 hdfs dfs -cat hftp://:50070/user/yesha/1409773968/L1/a.txt
 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 HEllo World..!!
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11067) warning message 'ssl.client.truststore.location has not been set' gets printed for hftp command

2014-09-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11067:
---
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks for the contribution Xiaoyu and thanks 
for reviewing Haohui.

 warning message 'ssl.client.truststore.location has not been set' gets 
 printed for hftp command
 ---

 Key: HADOOP-11067
 URL: https://issues.apache.org/jira/browse/HADOOP-11067
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Windows
Reporter: Yesha Vora
Assignee: Xiaoyu Yao
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6998.0.patch


 The hftp command prints 'WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded' in an unsecured Windows environment. 
 This issue only exists in the Windows environment.
 {noformat}
 hdfs dfs -cat hftp://:50070/user/yesha/1409773968/L1/a.txt
 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 HEllo World..!!
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11052) hadoop_verify_secure_prereq's results aren't checked in bin/hdfs

2014-09-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11052.
---
   Resolution: Fixed
Fix Version/s: 3.0.0

Committed to trunk.

 hadoop_verify_secure_prereq's results aren't checked in bin/hdfs
 

 Key: HADOOP-11052
 URL: https://issues.apache.org/jira/browse/HADOOP-11052
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Priority: Critical
 Fix For: 3.0.0

 Attachments: HADOOP-11052.patch


 Just need an else + exit in the secure_service stanza.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-05 Thread Swapnil Daingade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123342#comment-14123342
 ] 

Swapnil Daingade commented on HADOOP-11044:
---

Looked at the test failures; I am not sure they are directly related to the fix. 
I had submitted the exact same patch earlier. The two tests that failed then, 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract and 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover, have now 
passed, while a test that passed earlier, 
org.apache.hadoop.ipc.TestFairCallQueue, has now failed. I still cannot access 
diffJavacWarnings.txt from the testReport, so I cannot address the increase in 
the number of warnings.
It would be great if someone could take a look at this. Please let me know if 
there is anything more I can do to help with the patch.



 FileSystem counters can overflow for large number of readOps, largeReadOps, 
 writeOps
 

 Key: HADOOP-11044
 URL: https://issues.apache.org/jira/browse/HADOOP-11044
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.0, 2.4.1
Reporter: Swapnil Daingade
Priority: Minor
 Attachments: 11044.patch4


 The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
 readOps, largeReadOps, and writeOps as int. Also, the 
 org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
 getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
 values can overflow if they exceed 2^31-1, showing negative values. It would 
 be nice if these could be changed to long.
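The overflow described above is easy to demonstrate with a minimal standalone snippet (not Hadoop code):

```java
public class CounterOverflow {
    public static void main(String[] args) {
        int readOps = Integer.MAX_VALUE;   // 2^31 - 1, the int ceiling
        readOps++;                         // wraps around to -2147483648
        long readOpsLong = Integer.MAX_VALUE;
        readOpsLong++;                     // 2147483648, still positive
        // prints: int: -2147483648, long: 2147483648
        System.out.println("int: " + readOps + ", long: " + readOpsLong);
    }
}
```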



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2014-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123361#comment-14123361
 ] 

Allen Wittenauer commented on HADOOP-11027:
---

Fixed hadoop_secure_verify_prereq as part of HADOOP-11052.

 HADOOP_SECURE_COMMAND catch-all
 ---

 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical

 Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  Here's a list 
 of issues!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11068) Match hadoop.auth cookie format to jetty output

2014-09-05 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11068:
---

 Summary: Match hadoop.auth cookie format to jetty output
 Key: HADOOP-11068
 URL: https://issues.apache.org/jira/browse/HADOOP-11068
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


See: 
https://issues.apache.org/jira/browse/HADOOP-10911?focusedCommentId=14111626page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14111626

I posted the cookie format that jetty generates, but I attached a version of 
the patch with an older format.  Note, because the tests are pretty 
comprehensive, this cookie format works (it fixes the issue we were having with 
Solr), but it would probably be better to match the format that jetty generates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123419#comment-14123419
 ] 

Hadoop QA commented on HADOOP-11049:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666821/HADOOP-11049.patch
  against trunk revision 45efc96.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.mapred.TestJavaSerialization

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4659//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4659//console

This message is automatically generated.

 javax package system class default is too broad
 ---

 Key: HADOOP-11049
 URL: https://issues.apache.org/jira/browse/HADOOP-11049
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-11049.patch, HADOOP-11049.patch


 The system class default defined in ApplicationClassLoader includes the 
 prefix javax., which is too broad. The intent of the system classes is to 
 exempt classes that are provided by the JDK along with hadoop and the 
 minimally necessary dependencies that are guaranteed to be on the system 
 classpath; javax. covers more than that.
 For example, JSR-330, which is part of Java EE (not Java SE), has 
 javax.inject. Packages like these should not be declared as system classes, 
 as that will result in a ClassNotFoundException if they are needed and 
 present on the user classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11065) Rat check should exclude **/build/**

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123446#comment-14123446
 ] 

Andrew Wang commented on HADOOP-11065:
--

+1

 Rat check should exclude **/build/**
 

 Key: HADOOP-11065
 URL: https://issues.apache.org/jira/browse/HADOOP-11065
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-11065.patch


 https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ copies the 
 nightly scripts under build/. For the rat-check to pass here, we should 
 exclude the build directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-05 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123465#comment-14123465
 ] 

Sangjin Lee commented on HADOOP-11049:
--

The test failure appears unrelated to the patch.

 javax package system class default is too broad
 ---

 Key: HADOOP-11049
 URL: https://issues.apache.org/jira/browse/HADOOP-11049
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-11049.patch, HADOOP-11049.patch


 The system class default defined in ApplicationClassLoader includes the 
 prefix javax., which is too broad. The intent of the system classes is to 
 exempt classes that are provided by the JDK along with hadoop and the 
 minimally necessary dependencies that are guaranteed to be on the system 
 classpath; javax. covers more than that.
 For example, JSR-330, which is part of Java EE (not Java SE), has 
 javax.inject. Packages like these should not be declared as system classes, 
 as that will result in a ClassNotFoundException if they are needed and 
 present on the user classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123475#comment-14123475
 ] 

Hadoop QA commented on HADOOP-11032:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666820/HADOOP-11032.3.patch
  against trunk revision 45efc96.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4658//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4658//console

This message is automatically generated.

 Replace use of Guava Stopwatch with Apache StopWatch
 

 Key: HADOOP-11032
 URL: https://issues.apache.org/jira/browse/HADOOP-11032
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gary Steelman
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-11032.1.patch, HADOOP-11032.2.patch, 
 HADOOP-11032.3.patch, HADOOP-11032.3.patch


 This patch reduces Hadoop's dependency on an old version of Guava. 
 Stopwatch.elapsedMillis() isn't part of Guava past v16, and the tools I'm 
 working on use v17. 
 To remedy this and also reduce Hadoop's reliance on old versions of Guava, we 
 can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch), which 
 provides nearly equivalent functionality. apache.commons.lang is already a 
 dependency for Hadoop, so this will not introduce new dependencies. 
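The swap is mechanical at call sites. A minimal sketch of the call shapes (assumed call sites, not the actual patch; a tiny stand-in class mirrors commons-lang's StopWatch so the sketch compiles without commons-lang on the classpath):

```java
// Guava's Stopwatch.elapsedMillis() (removed after v16) maps onto
// commons-lang's StopWatch.getTime(); both return elapsed milliseconds.
public class StopWatchMigration {

    // Stand-in with the org.apache.commons.lang.time.StopWatch call shape,
    // so this sketch is self-contained.
    static final class StopWatch {
        private long startNanos;
        void start() { startNanos = System.nanoTime(); }
        long getTime() { return (System.nanoTime() - startNanos) / 1_000_000L; }
    }

    static long timeNoop() {
        // Before: Stopwatch sw = new Stopwatch().start(); long ms = sw.elapsedMillis();
        // After (commons-lang):
        StopWatch sw = new StopWatch();
        sw.start();
        return sw.getTime(); // elapsed milliseconds, >= 0
    }

    public static void main(String[] args) {
        System.out.println("elapsed ms: " + timeNoop());
    }
}
```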





[jira] [Updated] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11069:

Status: Patch Available  (was: Open)

 KMSClientProvider should use getAuthenticationMethod() to determine if in 
 proxyuser mode or not
 ---

 Key: HADOOP-11069
 URL: https://issues.apache.org/jira/browse/HADOOP-11069
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11069.patch


 Currently KMSClientProvider checks whether the login UGI differs from the current 
 UGI; it should instead check whether the current UGI's authentication method is PROXY.
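The proposed check can be sketched as follows. This uses stand-in types: the real UserGroupInformation and AuthenticationMethod live in hadoop-common, and the names here only mirror their call shape.

```java
public class ProxyCheck {

    // Stand-in for org.apache.hadoop.security's AuthenticationMethod values.
    enum AuthenticationMethod { SIMPLE, KERBEROS, PROXY }

    // Stand-in for UserGroupInformation.
    static final class Ugi {
        private final AuthenticationMethod method;
        Ugi(AuthenticationMethod method) { this.method = method; }
        AuthenticationMethod getAuthenticationMethod() { return method; }
    }

    // Proposed check: decide proxy-user mode from the current UGI's auth
    // method, rather than comparing the login UGI against the current UGI.
    static boolean isProxyUser(Ugi currentUgi) {
        return currentUgi.getAuthenticationMethod() == AuthenticationMethod.PROXY;
    }

    public static void main(String[] args) {
        System.out.println(isProxyUser(new Ugi(AuthenticationMethod.PROXY)));    // true
        System.out.println(isProxyUser(new Ugi(AuthenticationMethod.KERBEROS))); // false
    }
}
```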





[jira] [Updated] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11069:

Attachment: HADOOP-11069.patch



[jira] [Created] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11069:
---

 Summary: KMSClientProvider should use getAuthenticationMethod() to 
determine if in proxyuser mode or not
 Key: HADOOP-11069
 URL: https://issues.apache.org/jira/browse/HADOOP-11069
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11069.patch

Currently KMSClientProvider checks whether the login UGI differs from the current 
UGI; it should instead check whether the current UGI's authentication method is PROXY.





[jira] [Resolved] (HADOOP-11065) Rat check should exclude **/build/**

2014-09-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-11065.
---
   Resolution: Fixed
Fix Version/s: 2.5.1
 Hadoop Flags: Reviewed

 Rat check should exclude **/build/**
 

 Key: HADOOP-11065
 URL: https://issues.apache.org/jira/browse/HADOOP-11065
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Fix For: 2.5.1

 Attachments: hadoop-11065.patch


 https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ copies the 
 nightly scripts under build/. For the rat-check to pass here, we should 
 exclude the build directory. 
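As an illustration, excluding `**/build/**` would look roughly like the following pom excerpt (element names follow the apache-rat-plugin configuration; the surrounding pom context and Hadoop's existing exclude list are omitted):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Skip artifacts copied under build/ by the release builder job. -->
      <exclude>**/build/**</exclude>
    </excludes>
  </configuration>
</plugin>
```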





[jira] [Commented] (HADOOP-11065) Rat check should exclude **/build/**

2014-09-05 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123493#comment-14123493
 ] 

Karthik Kambatla commented on HADOOP-11065:
---

Thanks for the review, Andrew. Just committed this to trunk through branch-2.5.1.



[jira] [Created] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11070:
---

 Summary: Create MiniKMS for testing
 Key: HADOOP-11070
 URL: https://issues.apache.org/jira/browse/HADOOP-11070
 Project: Hadoop Common
  Issue Type: Test
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


This will facilitate testing HDFS and MR with HDFS encryption fully reproducing 
a real deployment setup.





[jira] [Updated] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11070:

Attachment: HADOOP-11070.patch

 Create MiniKMS for testing
 --

 Key: HADOOP-11070
 URL: https://issues.apache.org/jira/browse/HADOOP-11070
 Project: Hadoop Common
  Issue Type: Test
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11070.patch


 This will facilitate testing HDFS and MR with HDFS encryption fully 
 reproducing a real deployment setup.





[jira] [Updated] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11070:

Status: Patch Available  (was: Open)



[jira] [Created] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11071:
---

 Summary: KMSClientProvider should drain the local generated EEK 
cache on key rollover
 Key: HADOOP-11071
 URL: https://issues.apache.org/jira/browse/HADOOP-11071
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Minor


This is for formal correctness and to enable the HDFS encryption zone (EZ) tests to 
verify a rollover when testing with KMS.





[jira] [Commented] (HADOOP-11001) Fix test-patch to work with the git repo

2014-09-05 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123502#comment-14123502
 ] 

Todd Lipcon commented on HADOOP-11001:
--

It seems like we're not properly archiving artifacts - e.g. see 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4855// which had some 
warnings reported in the test-patch output, but doesn't have any archived 
artifacts from which to see what went wrong.

 Fix test-patch to work with the git repo
 

 Key: HADOOP-11001
 URL: https://issues.apache.org/jira/browse/HADOOP-11001
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Fix For: 2.5.1

 Attachments: hadoop-11001-1.patch, hadoop-11001-2.patch


 We want the precommit builds to run against the git repo after the 
 transition. 





[jira] [Updated] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11071:

Status: Patch Available  (was: Open)

 KMSClientProvider should drain the local generated EEK cache on key rollover
 

 Key: HADOOP-11071
 URL: https://issues.apache.org/jira/browse/HADOOP-11071
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Minor
 Attachments: HADOOP-11071.patch


 This is for formal correctness and to enable HDFS EZ to verify a rollover 
 when testing with KMS





[jira] [Updated] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11071:

Attachment: HADOOP-11071.patch



[jira] [Commented] (HADOOP-11001) Fix test-patch to work with the git repo

2014-09-05 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123507#comment-14123507
 ] 

Karthik Kambatla commented on HADOOP-11001:
---

Looks like some of the builds were fixed and others weren't. Looking into this. 



[jira] [Commented] (HADOOP-11001) Fix test-patch to work with the git repo

2014-09-05 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123510#comment-14123510
 ] 

Karthik Kambatla commented on HADOOP-11001:
---

My bad. Mixed it up with another issue post git. Will look into this early next 
week if no one else gets to it by then. 



[jira] [Commented] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123512#comment-14123512
 ] 

Andrew Wang commented on HADOOP-11069:
--

+1



[jira] [Commented] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123515#comment-14123515
 ] 

Andrew Wang commented on HADOOP-11070:
--

+1



[jira] [Updated] (HADOOP-11068) Match hadoop.auth cookie format to jetty output

2014-09-05 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11068:

Attachment: HADOOP-11068.patch

Here's a patch that matches the Jetty format.  I've written it assuming 
HADOOP-10911 is committed, though I don't see it in the source repo, at least 
on GitHub.  Can you take a look [~tucu00]?

 Match hadoop.auth cookie format to jetty output
 ---

 Key: HADOOP-11068
 URL: https://issues.apache.org/jira/browse/HADOOP-11068
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11068.patch


 See: 
 https://issues.apache.org/jira/browse/HADOOP-10911?focusedCommentId=14111626page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14111626
 I posted the cookie format that jetty generates, but I attached a version of 
 the patch with an older format.  Note, because the tests are pretty 
 comprehensive, this cookie format works (it fixes the issue we were having 
 with Solr), but it would probably be better to match the format that jetty 
 generates.





[jira] [Commented] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123525#comment-14123525
 ] 

Andrew Wang commented on HADOOP-11071:
--

* Need javadoc on new drain method
* I assume you call getNext() to refill the key cache? This seems like a weird 
thing to hardcode in, do we have to do this?



[jira] [Updated] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11071:

Attachment: HADOOP-11071.patch

New patch adding javadoc to drain(), removing getNext() (you're right, Andrew), and 
adding a test case.



[jira] [Commented] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123551#comment-14123551
 ] 

Hadoop QA commented on HADOOP-11070:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666875/HADOOP-11070.patch
  against trunk revision 0571b45.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4661//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4661//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123559#comment-14123559
 ] 

Andrew Wang commented on HADOOP-11071:
--

Thanks tucu, +1



[jira] [Commented] (HADOOP-11003) org.apache.hadoop.util.Shell should not take a dependency on binaries being deployed when used as a library

2014-09-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123575#comment-14123575
 ] 

Chris Nauroth commented on HADOOP-11003:


This feels like the behavior should be analogous to what we do in 
{{NativeCodeLoader}}: log once at warn level.  The full stack trace doesn't 
have much value here IMO.

 org.apache.hadoop.util.Shell should not take a dependency on binaries being 
 deployed when used as a library
 ---

 Key: HADOOP-11003
 URL: https://issues.apache.org/jira/browse/HADOOP-11003
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
 Environment: Windows
Reporter: Remus Rusanu

 HIVE-7845 shows how an exception is being thrown when 
 org.apache.hadoop.util.Shell is being used as a library, not as part of a 
 deployed Hadoop environment.
 {code}
 13:20:00 [ERROR pool-2-thread-4 Shell.getWinUtilsPath] Failed to locate the 
 winutils binary in the hadoop binary path
 java.io.IOException: Could not locate executable null\bin\winutils.exe in the 
 Hadoop binaries.
at 
 org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:324)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:339)
at org.apache.hadoop.util.Shell.clinit(Shell.java:332)
at 
 org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:918)
at 
 org.apache.hadoop.hive.conf.HiveConf$ConfVars.clinit(HiveConf.java:228)
 {code}
 There are similar native dependencies (eg. NativeIO and hadoop.dll) that 
 handle lack of binaries with fallback to non-native code paths.
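The NativeCodeLoader-style behavior suggested above — warn once without a stack trace and fall back instead of failing from a static initializer — can be sketched as follows. The class and method names here are hypothetical stand-ins, not the actual Shell code.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Logger;

public class WinUtilsLocator {
    private static final Logger LOG =
            Logger.getLogger(WinUtilsLocator.class.getName());
    private static final AtomicBoolean WARNED = new AtomicBoolean(false);

    /**
     * Returns the winutils path, or null when the Hadoop home is unset,
     * warning only on the first failed lookup instead of throwing.
     */
    static String getWinUtilsPath(String hadoopHome) {
        if (hadoopHome == null) {
            if (WARNED.compareAndSet(false, true)) {
                LOG.warning("winutils.exe not found; HADOOP_HOME is unset. "
                        + "Shell utilities that need it will be unavailable.");
            }
            return null; // graceful fallback for library callers
        }
        return hadoopHome + "\\bin\\winutils.exe";
    }

    public static void main(String[] args) {
        System.out.println(getWinUtilsPath(null));         // warns once, prints null
        System.out.println(getWinUtilsPath(null));         // silent the second time
        System.out.println(getWinUtilsPath("C:\\hadoop")); // resolved path
    }
}
```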





[jira] [Commented] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123582#comment-14123582
 ] 

Hadoop QA commented on HADOOP-11069:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666874/HADOOP-11069.patch
  against trunk revision 0571b45.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.crypto.key.kms.server.TestKMS

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4660//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4660//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123580#comment-14123580
 ] 

Hadoop QA commented on HADOOP-11071:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666877/HADOOP-11071.patch
  against trunk revision 0571b45.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4662//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4662//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123602#comment-14123602
 ] 

Alejandro Abdelnur commented on HADOOP-11069:
-

The test failures are due to 'Too many open files' on the slave.



[jira] [Commented] (HADOOP-11067) warning message 'ssl.client.truststore.location has not been set' gets printed for hftp command

2014-09-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123606#comment-14123606
 ] 

Alejandro Abdelnur commented on HADOOP-11067:
-

[~arpitagarwal], I don't see this committed to trunk.

 warning message 'ssl.client.truststore.location has not been set' gets 
 printed for hftp command
 ---

 Key: HADOOP-11067
 URL: https://issues.apache.org/jira/browse/HADOOP-11067
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Windows
Reporter: Yesha Vora
Assignee: Xiaoyu Yao
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6998.0.patch


 The hftp command prints 'WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded' in a Windows unsecured environment. 
 This issue only exists in the Windows environment.
 {noformat}
 hdfs dfs -cat hftp://:50070/user/yesha/1409773968/L1/a.txt
 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 HEllo World..!!
 {noformat}





[jira] [Commented] (HADOOP-11067) warning message 'ssl.client.truststore.location has not been set' gets printed for hftp command

2014-09-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123620#comment-14123620
 ] 

Arpit Agarwal commented on HADOOP-11067:


It was committed as HDFS-6998. Unfortunately I realized after pushing that this 
bug should have been moved to Hadoop common.

I just pushed a trunk commit to update CHANGES.txt.



[jira] [Commented] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123624#comment-14123624
 ] 

Andrew Wang commented on HADOOP-11069:
--

I opened BUILDS-18 to track this issue; we've been seeing similar test 
failures because of the nofiles ulimit. If this passes locally for you, let's 
just commit it.

 KMSClientProvider should use getAuthenticationMethod() to determine if in 
 proxyuser mode or not
 ---

 Key: HADOOP-11069
 URL: https://issues.apache.org/jira/browse/HADOOP-11069
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11069.patch


 Currently it checks whether the login UGI differs from the current UGI; it 
 should instead check whether the current UGI's auth method is PROXY.
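The proposed check can be sketched as follows; the enum and method names here 
are illustrative stand-ins, not the actual KMSClientProvider or 
UserGroupInformation code:

```java
public class ProxyCheck {
    // Illustrative subset of UGI authentication methods (assumed names).
    enum AuthenticationMethod { SIMPLE, KERBEROS, PROXY }

    // Proxy-user mode is decided by the current UGI's authentication
    // method, rather than by comparing the login UGI with the current UGI.
    static boolean isProxyUserMode(AuthenticationMethod current) {
        return current == AuthenticationMethod.PROXY;
    }

    public static void main(String[] args) {
        System.out.println(isProxyUserMode(AuthenticationMethod.PROXY));
        System.out.println(isProxyUserMode(AuthenticationMethod.KERBEROS));
    }
}
```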





[jira] [Commented] (HADOOP-11003) org.apache.hadoop.util.Shell should not take a dependency on binaries being deployed when used as a library

2014-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123625#comment-14123625
 ] 

Allen Wittenauer commented on HADOOP-11003:
---

One of the things I keep meaning to do is wrap that warning with a way to 
turn it off, because on systems that don't have native libs it's not useful. 
So keep that in mind too.

 org.apache.hadoop.util.Shell should not take a dependency on binaries being 
 deployed when used as a library
 ---

 Key: HADOOP-11003
 URL: https://issues.apache.org/jira/browse/HADOOP-11003
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
 Environment: Windows
Reporter: Remus Rusanu

 HIVE-7845 shows an exception being thrown when org.apache.hadoop.util.Shell 
 is used as a library rather than as part of a deployed Hadoop environment.
 {code}
 13:20:00 [ERROR pool-2-thread-4 Shell.getWinUtilsPath] Failed to locate the 
 winutils binary in the hadoop binary path
 java.io.IOException: Could not locate executable null\bin\winutils.exe in the 
 Hadoop binaries.
at 
 org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:324)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:339)
at org.apache.hadoop.util.Shell.clinit(Shell.java:332)
at 
 org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:918)
at 
 org.apache.hadoop.hive.conf.HiveConf$ConfVars.clinit(HiveConf.java:228)
 {code}
 There are similar native dependencies (e.g. NativeIO and hadoop.dll) that 
 handle the lack of binaries by falling back to non-native code paths.
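A minimal sketch of such a fallback; the class and method names are 
hypothetical, not the actual Shell.java API:

```java
import java.io.File;

public class WinUtilsLookup {
    // Resolve an optional helper binary, returning null instead of
    // throwing when no Hadoop deployment is present, so library
    // consumers can fall back to non-native code paths.
    static String findBinary(String hadoopHome, String name) {
        if (hadoopHome == null) {
            return null; // used as a library; no deployment available
        }
        File exe = new File(new File(hadoopHome, "bin"), name);
        return exe.exists() ? exe.getAbsolutePath() : null;
    }

    public static void main(String[] args) {
        // With no Hadoop home configured, the lookup degrades gracefully.
        System.out.println(findBinary(null, "winutils.exe"));
    }
}
```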





[jira] [Commented] (HADOOP-11071) KMSClientProvider should drain the local generated EEK cache on key rollover

2014-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123631#comment-14123631
 ] 

Hadoop QA commented on HADOOP-11071:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12666888/HADOOP-11071.patch
  against trunk revision 0571b45.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4663//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4663//console

This message is automatically generated.

 KMSClientProvider should drain the local generated EEK cache on key rollover
 

 Key: HADOOP-11071
 URL: https://issues.apache.org/jira/browse/HADOOP-11071
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Minor
 Attachments: HADOOP-11071.patch, HADOOP-11071.patch


 This is for formal correctness and to enable HDFS EZ to verify a rollover 
 when testing with KMS.





[jira] [Commented] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123645#comment-14123645
 ] 

Alejandro Abdelnur commented on HADOOP-10758:
-

+1

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Commented] (HADOOP-11035) distcp on mr1(branch-1) fails with NPE using a short relative source path.

2014-09-05 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123915#comment-14123915
 ] 

Yongjun Zhang commented on HADOOP-11035:


Hi Zhihai, 

Thanks for finding the issue and providing the patch. The patch looks good to 
me. A couple of minor comments about the test: 

- Can we add a comment describing the resulting command line? The string 
manipulation (substr, +1, etc.) in the test doesn't make it obvious what the 
resulting command line looks like. 
- Add a couple of empty lines to the test method to separate each functional 
block: initialization, running distcp, cleanup, etc.

Thanks.



 distcp on mr1(branch-1) fails with NPE using a short relative source path.
 --

 Key: HADOOP-11035
 URL: https://issues.apache.org/jira/browse/HADOOP-11035
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-11035.000.patch


 distcp on mr1 (branch-1) fails with an NPE when using a short relative 
 source path. The failure is in DistCp.java, where makeRelative returns null 
 at the following code, because the parameters passed to makeRelative are not 
 in the same format: root is a relative path while child.getPath() is a full 
 path.
 {code}
 final String dst = makeRelative(root, child.getPath());
 {code}
 The solution is to change root to a full path so that it matches 
 child.getPath().
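The proposed fix can be sketched as follows; qualify() is a hypothetical 
helper for illustration, not the actual DistCp code:

```java
public class RelativePaths {
    // Qualify a relative root against the working directory so that both
    // arguments to makeRelative share the same (absolute) form.
    static String qualify(String root, String workingDir) {
        if (root.startsWith("/")) {
            return root; // already a full path
        }
        return workingDir.endsWith("/") ? workingDir + root
                                        : workingDir + "/" + root;
    }

    public static void main(String[] args) {
        System.out.println(qualify("src/data", "/user/alice"));
        System.out.println(qualify("/abs/path", "/user/alice"));
    }
}
```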





[jira] [Created] (HADOOP-11072) better Logging in DNS.java

2014-09-05 Thread jay vyas (JIRA)
jay vyas created HADOOP-11072:
-

 Summary: better Logging in  DNS.java
 Key: HADOOP-11072
 URL: https://issues.apache.org/jira/browse/HADOOP-11072
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0, 2.3.0
Reporter: jay vyas
Priority: Minor


The DNS.java class should be more informative, and possibly fail early when 
reverse DNS is broken. Right now it is vulnerable to a cryptic 
ArrayIndexOutOfBoundsException.

{noformat}
String[] parts = hostIp.getHostAddress().split("\\.");
String reverseIp = parts[3] + "." + parts[2] + ...
{noformat}

While working on this patch we can also improve a couple of other minor 
logging statements in the net package.
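A defensive version of the snippet above might look like this; this is a 
sketch only, not the actual DNS.java code:

```java
public class ReverseIp {
    // Build the reversed dotted-quad used for reverse-DNS lookups,
    // validating the input first so a malformed address fails with a
    // clear message instead of an ArrayIndexOutOfBoundsException.
    static String reverseDottedQuad(String hostAddress) {
        String[] parts = hostAddress.split("\\.");
        if (parts.length != 4) {
            throw new IllegalArgumentException(
                "Expected an IPv4 dotted quad, got: " + hostAddress);
        }
        return parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0];
    }

    public static void main(String[] args) {
        System.out.println(reverseDottedQuad("192.168.1.10"));
    }
}
```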







[jira] [Updated] (HADOOP-11072) better Logging in DNS.java

2014-09-05 Thread jay vyas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jay vyas updated HADOOP-11072:
--
Component/s: net

 better Logging in  DNS.java
 ---

 Key: HADOOP-11072
 URL: https://issues.apache.org/jira/browse/HADOOP-11072
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.3.0, 2.4.0
Reporter: jay vyas
Priority: Minor
  Labels: logging

 The DNS.java class should be more informative, and possibly fail early when 
 reverse DNS is broken. Right now it is vulnerable to a cryptic 
 ArrayIndexOutOfBoundsException.
 {noformat}
 String[] parts = hostIp.getHostAddress().split("\\.");
 String reverseIp = parts[3] + "." + parts[2] + ...
 {noformat}
 While working on this patch we can also improve a couple of other minor 
 logging statements in the net package.





[jira] [Updated] (HADOOP-11072) better Logging in DNS.java

2014-09-05 Thread jay vyas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jay vyas updated HADOOP-11072:
--
Labels: logging  (was: )

 better Logging in  DNS.java
 ---

 Key: HADOOP-11072
 URL: https://issues.apache.org/jira/browse/HADOOP-11072
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.3.0, 2.4.0
Reporter: jay vyas
Priority: Minor
  Labels: logging

 The DNS.java class should be more informative, and possibly fail early when 
 reverse DNS is broken. Right now it is vulnerable to a cryptic 
 ArrayIndexOutOfBoundsException.
 {noformat}
 String[] parts = hostIp.getHostAddress().split("\\.");
 String reverseIp = parts[3] + "." + parts[2] + ...
 {noformat}
 While working on this patch we can also improve a couple of other minor 
 logging statements in the net package.





[jira] [Updated] (HADOOP-11072) better Logging in DNS.java

2014-09-05 Thread jay vyas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jay vyas updated HADOOP-11072:
--
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-3619

 better Logging in  DNS.java
 ---

 Key: HADOOP-11072
 URL: https://issues.apache.org/jira/browse/HADOOP-11072
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.3.0, 2.4.0
Reporter: jay vyas
Priority: Minor
  Labels: logging

 The DNS.java class should be more informative, and possibly fail early when 
 reverse DNS is broken. Right now it is vulnerable to a cryptic 
 ArrayIndexOutOfBoundsException.
 {noformat}
 String[] parts = hostIp.getHostAddress().split("\\.");
 String reverseIp = parts[3] + "." + parts[2] + ...
 {noformat}
 While working on this patch we can also improve a couple of other minor 
 logging statements in the net package.





[jira] [Commented] (HADOOP-11072) better Logging in DNS.java

2014-09-05 Thread jay vyas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124255#comment-14124255
 ] 

jay vyas commented on HADOOP-11072:
---

I made this a subtask of HADOOP-3619 because I think logging is probably an 
essential component of any patch that aims to fix DNS-related issues in 
DNS.java.

 better Logging in  DNS.java
 ---

 Key: HADOOP-11072
 URL: https://issues.apache.org/jira/browse/HADOOP-11072
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.3.0, 2.4.0
Reporter: jay vyas
Priority: Minor
  Labels: logging

 The DNS.java class should be more informative, and possibly fail early when 
 reverse DNS is broken. Right now it is vulnerable to a cryptic 
 ArrayIndexOutOfBoundsException.
 {noformat}
 String[] parts = hostIp.getHostAddress().split("\\.");
 String reverseIp = parts[3] + "." + parts[2] + ...
 {noformat}
 While working on this patch we can also improve a couple of other minor 
 logging statements in the net package.





[jira] [Updated] (HADOOP-11070) Create MiniKMS for testing

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11070:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

 Create MiniKMS for testing
 --

 Key: HADOOP-11070
 URL: https://issues.apache.org/jira/browse/HADOOP-11070
 Project: Hadoop Common
  Issue Type: Test
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11070.patch


 This will facilitate testing HDFS and MR with HDFS encryption fully 
 reproducing a real deployment setup.





[jira] [Updated] (HADOOP-11069) KMSClientProvider should use getAuthenticationMethod() to determine if in proxyuser mode or not

2014-09-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11069:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

 KMSClientProvider should use getAuthenticationMethod() to determine if in 
 proxyuser mode or not
 ---

 Key: HADOOP-11069
 URL: https://issues.apache.org/jira/browse/HADOOP-11069
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-11069.patch


 Currently it checks whether the login UGI differs from the current UGI; it 
 should instead check whether the current UGI's auth method is PROXY.


