[jira] [Commented] (HADOOP-11091) Eliminate old configuration parameter names from s3a

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135313#comment-14135313
 ] 

Hudson commented on HADOOP-11091:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #682 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/682/])
HADOOP-11091. Eliminate old configuration parameter names from s3a (dsw via 
cmccabe) (cmccabe: rev 0ac760a58d96b36ab30e9d60679bbea6365ef120)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


 Eliminate old configuration parameter names from s3a
 

 Key: HADOOP-11091
 URL: https://issues.apache.org/jira/browse/HADOOP-11091
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: David S. Wang
Assignee: David S. Wang
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11091-1.patch


 This JIRA is to track eliminating the configuration parameter names with 
 "old" in the name from s3a.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135315#comment-14135315
 ] 

Hudson commented on HADOOP-10868:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #682 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/682/])
HADOOP-10868. AuthenticationFilter should support externalizing the secret for 
signing and provide rotation support. (rkanter via tucu) (tucu: rev 
932ae036acb96634c5dd435d57ba02ce4d5e8918)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRolloverSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestJaasConfiguration.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestStringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
* hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java
HADOOP-10868. Addendum (tucu: rev 7e08c0f23f58aa143f0997f2472e8051175142e9)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java


 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868_addendum.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10400) Incorporate new S3A FileSystem implementation

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135317#comment-14135317
 ] 

Hudson commented on HADOOP-10400:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #682 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/682/])
HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by 
Jordan Mendelson and Dave Wang. (atm: rev 
24d920b80eb3626073925a1d0b6dcf148add8cc0)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/S3AContract.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRootDir.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractMkdir.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractOpen.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
* hadoop-tools/hadoop-aws/pom.xml
* hadoop-tools/hadoop-azure/pom.xml
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AnonymousAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractDelete.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractCreate.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3n.xml
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java


 Incorporate new S3A FileSystem implementation
 -

 Key: HADOOP-10400
 URL: https://issues.apache.org/jira/browse/HADOOP-10400
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, fs/s3
Affects Versions: 2.4.0
Reporter: Jordan Mendelson
Assignee: Jordan Mendelson
 Fix For: 2.6.0

 Attachments: HADOOP-10400-1.patch, HADOOP-10400-2.patch, 
 HADOOP-10400-3.patch, HADOOP-10400-4.patch, HADOOP-10400-5.patch, 
 HADOOP-10400-6.patch, HADOOP-10400-7.patch, HADOOP-10400-8-branch-2.patch, 
 HADOOP-10400-8.patch, HADOOP-10400-branch-2.patch


 The s3native filesystem has a number of limitations (some of which were 
 recently fixed by HADOOP-9454). This patch adds an s3a filesystem which uses 
 the aws-sdk instead of the jets3t library. There are a number of improvements 
 over s3native including:
 - Parallel copy (rename) support (dramatically speeds up commits on large 
 files)
 - AWS S3 explorer compatible empty directory files "xyz/" instead of 
 "xyz_$folder$" (reduces littering)
 - Ignores "_$folder$" files created by s3native and other S3 
 browsing utilities
 - Supports multiple output buffer dirs to even out IO when uploading files
 - Supports IAM role-based authentication
 - Allows setting a default canned ACL for uploads (public, private, etc.)
 - Better error recovery handling
 - Should handle input seeks without having to download the whole file (used 
 for splits a lot)
 This code is a copy of https://github.com/Aloisius/hadoop-s3a with patches to 
 various pom files to get it to build against trunk. I've been using 0.0.1 in 
 production with CDH 4 for several months and CDH 5 for a few days. The 
 version here is 0.0.2, which changes around some keys to hopefully bring the 
 key name style more in line with the rest of hadoop 2.x.
 *Tunable parameters:*
 fs.s3a.access.key - Your AWS access key ID (omit for role authentication)
 fs.s3a.secret.key - Your AWS secret key (omit for role authentication)
 fs.s3a.connection.maximum - Controls how many parallel connections 
 HttpClient spawns (default: 15)
 fs.s3a.connection.ssl.enabled - Enables or disables SSL connections to S3 
 (default: true)
 fs.s3a.attempts.maximum - How many times we should retry commands on 

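To make the key names above concrete: a minimal sketch of setting them programmatically through the standard Hadoop {{Configuration}} API. The bucket name and credential values are placeholders, and the numeric settings simply repeat the defaults listed above.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Static credentials; omit both keys to fall back to IAM role authentication.
    conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY_ID");
    conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");
    // Connection tuning, shown with the defaults described above.
    conf.setInt("fs.s3a.connection.maximum", 15);
    conf.setBoolean("fs.s3a.connection.ssl.enabled", true);

    // "my-bucket" is a placeholder bucket name.
    FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}
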
[jira] [Created] (HADOOP-11095) How about Null check when closing inputstream object in JavaKeyStoreProvider#() ?

2014-09-16 Thread skrho (JIRA)
skrho created HADOOP-11095:
--

 Summary: How about Null check when closing inputstream object in 
JavaKeyStoreProvider#() ?
 Key: HADOOP-11095
 URL: https://issues.apache.org/jira/browse/HADOOP-11095
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: skrho
Priority: Minor


In the finally block:
  InputStream is = pwdFile.openStream();
  try {
password = IOUtils.toCharArray(is);
  } finally {
is.close();
  }
  
How about a null check when closing the InputStream object?
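
A minimal sketch of the guarded close being suggested, mirroring the snippet above ({{pwdFile}} and {{password}} come from the surrounding code). The open moves inside the try so the null check has something to guard:

{code}
InputStream is = null;
try {
  is = pwdFile.openStream();
  password = IOUtils.toCharArray(is);
} finally {
  // Null check: if openStream() itself threw, "is" was never assigned.
  if (is != null) {
    is.close();
  }
}
{code}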



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11095) How about Null check when closing inputstream object in JavaKeyStoreProvider#() ?

2014-09-16 Thread skrho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

skrho updated HADOOP-11095:
---
Attachment: HADOOP-11095_001.patch

I added null-checking logic in the patch.

 How about Null check when closing inputstream object in 
 JavaKeyStoreProvider#() ?
 -

 Key: HADOOP-11095
 URL: https://issues.apache.org/jira/browse/HADOOP-11095
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: skrho
Priority: Minor
 Attachments: HADOOP-11095_001.patch


 In the finally block:
   InputStream is = pwdFile.openStream();
   try {
 password = IOUtils.toCharArray(is);
   } finally {
 is.close();
   }
   
 How about a null check when closing the InputStream object?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2014-09-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135405#comment-14135405
 ] 

Steve Loughran commented on HADOOP-9438:


...once we've switched to Java 7 we can use the improved Java IO methods, which 
will do the test+fail in one go (possibly even atomically)
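
For reference, a minimal sketch of the Java 7 approach being referred to: {{java.nio.file.Files.createDirectory}} performs the existence test and the creation in a single call, throwing {{FileAlreadyExistsException}} on the second attempt. The path reuses the one from the issue description:

{code}
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MkdirSketch {
  public static void main(String[] args) throws IOException {
    Path dir = Paths.get("/tmp/bobby.12345");
    Files.createDirectory(dir);      // succeeds: directory did not exist
    try {
      Files.createDirectory(dir);    // test+create in one call
    } catch (FileAlreadyExistsException e) {
      System.out.println("second mkdir failed as expected: " + e);
    }
  }
}
{code}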

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135482#comment-14135482
 ] 

Hudson commented on HADOOP-10868:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1873 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1873/])
HADOOP-10868. AuthenticationFilter should support externalizing the secret for 
signing and provide rotation support. (rkanter via tucu) (tucu: rev 
932ae036acb96634c5dd435d57ba02ce4d5e8918)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestJaasConfiguration.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestStringSignerSecretProvider.java
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRolloverSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
HADOOP-10868. Addendum (tucu: rev 7e08c0f23f58aa143f0997f2472e8051175142e9)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java


 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868_addendum.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11091) Eliminate old configuration parameter names from s3a

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135480#comment-14135480
 ] 

Hudson commented on HADOOP-11091:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1873 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1873/])
HADOOP-11091. Eliminate old configuration parameter names from s3a (dsw via 
cmccabe) (cmccabe: rev 0ac760a58d96b36ab30e9d60679bbea6365ef120)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Eliminate old configuration parameter names from s3a
 

 Key: HADOOP-11091
 URL: https://issues.apache.org/jira/browse/HADOOP-11091
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: David S. Wang
Assignee: David S. Wang
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11091-1.patch


 This JIRA is to track eliminating the configuration parameter names with 
 "old" in the name from s3a.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10400) Incorporate new S3A FileSystem implementation

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135484#comment-14135484
 ] 

Hudson commented on HADOOP-10400:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1873 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1873/])
HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by 
Jordan Mendelson and Dave Wang. (atm: rev 
24d920b80eb3626073925a1d0b6dcf148add8cc0)
* hadoop-tools/hadoop-azure/pom.xml
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractDelete.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/S3AContract.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractMkdir.java
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AnonymousAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractOpen.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* hadoop-tools/hadoop-aws/pom.xml
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3n.xml
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractCreate.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRootDir.java
* hadoop-project/pom.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java


 Incorporate new S3A FileSystem implementation
 -

 Key: HADOOP-10400
 URL: https://issues.apache.org/jira/browse/HADOOP-10400
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, fs/s3
Affects Versions: 2.4.0
Reporter: Jordan Mendelson
Assignee: Jordan Mendelson
 Fix For: 2.6.0

 Attachments: HADOOP-10400-1.patch, HADOOP-10400-2.patch, 
 HADOOP-10400-3.patch, HADOOP-10400-4.patch, HADOOP-10400-5.patch, 
 HADOOP-10400-6.patch, HADOOP-10400-7.patch, HADOOP-10400-8-branch-2.patch, 
 HADOOP-10400-8.patch, HADOOP-10400-branch-2.patch


 The s3native filesystem has a number of limitations (some of which were 
 recently fixed by HADOOP-9454). This patch adds an s3a filesystem which uses 
 the aws-sdk instead of the jets3t library. There are a number of improvements 
 over s3native including:
 - Parallel copy (rename) support (dramatically speeds up commits on large 
 files)
 - AWS S3 explorer compatible empty directory files "xyz/" instead of 
 "xyz_$folder$" (reduces littering)
 - Ignores "_$folder$" files created by s3native and other S3 
 browsing utilities
 - Supports multiple output buffer dirs to even out IO when uploading files
 - Supports IAM role-based authentication
 - Allows setting a default canned ACL for uploads (public, private, etc.)
 - Better error recovery handling
 - Should handle input seeks without having to download the whole file (used 
 for splits a lot)
 This code is a copy of https://github.com/Aloisius/hadoop-s3a with patches to 
 various pom files to get it to build against trunk. I've been using 0.0.1 in 
 production with CDH 4 for several months and CDH 5 for a few days. The 
 version here is 0.0.2, which changes around some keys to hopefully bring the 
 key name style more in line with the rest of hadoop 2.x.
 *Tunable parameters:*
 fs.s3a.access.key - Your AWS access key ID (omit for role authentication)
 fs.s3a.secret.key - Your AWS secret key (omit for role authentication)
 fs.s3a.connection.maximum - Controls how many parallel connections 
 HttpClient spawns (default: 15)
 fs.s3a.connection.ssl.enabled - Enables or disables SSL connections to S3 
 (default: true)
 fs.s3a.attempts.maximum - How many times we should retry commands 

[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135490#comment-14135490
 ] 

Hadoop QA commented on HADOOP-9438:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12583986/HADOOP-9438.20130521.1.patch
  against trunk revision 7e08c0f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4737//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4737//console

This message is automatically generated.

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2014-09-16 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135581#comment-14135581
 ] 

Yongjun Zhang commented on HADOOP-11045:


I checked PreCommit-HDFS-Build, and here is the result. It says 
testPipelineRecoveryStress is the topmost (HDFS-6694), and without solving it, 
we might hide some real problems.

The second and third tests in the list below failed for a similar reason, 
"Too many open files". This is suspicious because it was not the case before; 
some recent code change might have introduced the problem (just filed 
HDFS-7070).

{code}
Recently FAILED builds in url: 
https://builds.apache.org//job/PreCommit-HDFS-Build
THERE ARE 18 builds (out of 20) that have failed tests in the past 3 days, 
as listed below:
..
Among 20 runs examined, all failed tests #failedRuns: testName:
8: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testPipelineRecoveryStress
6: org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testResponseCode
2: 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testRenameDirToSelf
2: 
org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth
2: 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen.testFsIsEncrypted
2: 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testOverWriteAndRead
2: 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testOutputStreamClosedTwice
2: 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen.testFsIsEncrypted
2: 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored
2: org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testSeek
1: org.apache.hadoop.hdfs.TestDFSShell.testGet
1: org.apache.hadoop.hdfs.TestDFSUpgrade.testUpgrade
1: org.apache.hadoop.fs.TestFsShellCopy.testCopyNoCrc
1: org.apache.hadoop.crypto.key.TestValueQueue.testgetAtMostPolicyALL
1: org.apache.hadoop.hdfs.TestDFSShell.testCopyToLocal
..
{code}



 Introducing a tool to detect flaky tests of hadoop jenkins test job
 ---

 Key: HADOOP-11045
 URL: https://issues.apache.org/jira/browse/HADOOP-11045
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, tools
Affects Versions: 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch


 Filing this jira to introduce a tool that detects flaky tests in hadoop 
 jenkins test jobs. Certainly it can be adapted to projects other than hadoop.
 I developed the tool on top of some initial work [~tlipcon] did. We find it 
 quite useful. With Todd's agreement, I'd like to push it upstream so all 
 of us can share it (thanks Todd for the initial work and support).
 The idea is, when one needs to see whether a test failure seen in a 
 pre-build jenkins run is flaky or not, one can run this tool to get a 
 good idea. The tool can also be used to look at the failure trend of a 
 testcase in a given jenkins job. I hope people find it useful.
 This tool is for hadoop contributors rather than hadoop users. Thanks 
 [~tedyu] for the advice to put it in the dev-support dir.
 Description of the tool:
 {code}
 #
 # Given a jenkins test job, this script examines all runs of the job done
 # within specified period of time (number of days prior to the execution
 # time of this script), and reports all failed tests.
 #
 # The output of this script includes a section for each run that has failed
 # tests, with each failed test name listed.
 #
 # More importantly, at the end, it outputs a summary section that lists all
 # failed tests across all examined runs, indicates how many runs each test
 # failed in, and sorts the failed tests by that count.
 #
 # This way, when we see failed tests in a PreCommit build, we can quickly tell
 # whether a failed test is a new failure or has failed before, in which case
 # it may just be a flaky test.
 #
 # Of course, to be 100% sure about the reason of a failed test, closer look 
 # at the failed test for the specific run is necessary.
 #
 {code}
 How to use the tool:
 {code}
 Usage: determine-flaky-tests-hadoop.py [options]
 Options:
   -h, --helpshow this help message and exit
   -J JENKINS_URL, --jenkins-url=JENKINS_URL
 Jenkins URL
   -j JOB_NAME, --job-name=JOB_NAME
 Job name to look at
   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
 Number of days to examine
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-09-16 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135675#comment-14135675
 ] 

Larry McCay commented on HADOOP-10922:
--

[~andrew.wang]?

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch, HADOOP-10922-2.patch, 
 HADOOP-10922-3.patch


 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes

2014-09-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135696#comment-14135696
 ] 

Chris Nauroth commented on HADOOP-11064:


[~cmccabe], my understanding is that we need to restore 
{{nativeVerifyChunkedSums}}.  (See method signature in Steve's comment from 
05/Sep/14.)

 UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 
 method changes
 --

 Key: HADOOP-11064
 URL: https://issues.apache.org/jira/browse/HADOOP-11064
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
 Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
 2.4 JARs
Reporter: Steve Loughran
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, 
 HADOOP-11064.003.patch, HADOOP-11064.004.patch


 The private native method names and signatures in {{NativeCrc32}} were 
 changed in HDFS-6561 ... as a result, hadoop-common-2.4 JARs get unsatisfied 
 link errors when they try to perform checksums. 
 This essentially stops Hadoop 2.4 applications from running on Hadoop 2.6 
 unless they are rebuilt and repackaged with the hadoop-2.6 JARs.
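
To illustrate the failure mode, a sketch of how such a break surfaces (the parameter list loosely follows the signature quoted in Steve's comment and is illustrative, not the actual {{NativeCrc32}} internals): the 2.4 JAR declares a private native method, and the JVM resolves the matching JNI symbol in libhadoop.so lazily, on the first call.

{code}
import java.nio.ByteBuffer;

class NativeCrc32Sketch {
  // As declared in the hadoop-2.4 JAR (names illustrative). The JNI symbol
  // is resolved on first call against whichever libhadoop.so is loaded.
  private static native void nativeVerifyChunkedSums(
      int bytesPerSum, int checksumType,
      ByteBuffer sums, int sumsOffset,
      ByteBuffer data, int dataOffset, int dataLength,
      String fileName, long basePos);
  // If the hadoop-2.6 libhadoop.so no longer exports the matching symbol,
  // the first call throws UnsatisfiedLinkError at runtime, not at class load.
}
{code}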



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-16 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135791#comment-14135791
 ] 

Robert Kanter commented on HADOOP-10868:


{quote}On a related note, have you considered having a testcase using minikdc 
to exercise the ZK SASL client?{quote}
I had created a test case in Oozie for this, but it's very brittle and a little 
hacky (it's actually broken right now after upgrading ZooKeeper/Curator: 
OOZIE-1959); it also requires its own JVM so it doesn't mess with other tests.  
The problem is basically that we're setting system properties and logging in at 
that level, and there doesn't appear to be a way to log out.  I would normally 
have made a test like this, but for these problems it's not worth it IMO.  

 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868_addendum.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6590) Add a username check in hadoop-daemon.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6590:
-
Attachment: HADOOP-6590-02.patch

This patch will allow an admin to define HADOOP_command_USER as the only user 
that should be allowed to execute that command.

For example, HADOOP_namenode_USER=hdfs will prevent the namenode from being 
run by anyone but the hdfs user.

 Add a username check in hadoop-daemon.sh
 

 Key: HADOOP-6590
 URL: https://issues.apache.org/jira/browse/HADOOP-6590
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Attachments: HADOOP-6590-02.patch, HADOOP-6590-2010-07-12.txt, 
 HADOOP-6590.patch


 We experienced a case that sometimes we accidentally started HDFS or 
 MAPREDUCE with root user. Then the directory permission will be modified and 
 we have to chown them. It will be nice if there can be a username checking in 
 the hadoop-daemon.sh script so that we always start with the desired username.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11091) Eliminate old configuration parameter names from s3a

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135935#comment-14135935
 ] 

Hudson commented on HADOOP-11091:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1898 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1898/])
HADOOP-11091. Eliminate old configuration parameter names from s3a (dsw via 
cmccabe) (cmccabe: rev 0ac760a58d96b36ab30e9d60679bbea6365ef120)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


 Eliminate old configuration parameter names from s3a
 

 Key: HADOOP-11091
 URL: https://issues.apache.org/jira/browse/HADOOP-11091
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: David S. Wang
Assignee: David S. Wang
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11091-1.patch


 This JIRA is to track eliminating the configuration parameter names with 
 "old" in the name from s3a.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10400) Incorporate new S3A FileSystem implementation

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135939#comment-14135939
 ] 

Hudson commented on HADOOP-10400:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1898 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1898/])
HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by 
Jordan Mendelson and Dave Wang. (atm: rev 
24d920b80eb3626073925a1d0b6dcf148add8cc0)
* hadoop-tools/hadoop-aws/pom.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AnonymousAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractOpen.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
* hadoop-tools/hadoop-azure/pom.xml
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3n.xml
* 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRootDir.java
* hadoop-project/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractMkdir.java
* hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractCreate.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractDelete.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/S3AContract.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


 Incorporate new S3A FileSystem implementation
 -

 Key: HADOOP-10400
 URL: https://issues.apache.org/jira/browse/HADOOP-10400
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, fs/s3
Affects Versions: 2.4.0
Reporter: Jordan Mendelson
Assignee: Jordan Mendelson
 Fix For: 2.6.0

 Attachments: HADOOP-10400-1.patch, HADOOP-10400-2.patch, 
 HADOOP-10400-3.patch, HADOOP-10400-4.patch, HADOOP-10400-5.patch, 
 HADOOP-10400-6.patch, HADOOP-10400-7.patch, HADOOP-10400-8-branch-2.patch, 
 HADOOP-10400-8.patch, HADOOP-10400-branch-2.patch


 The s3native filesystem has a number of limitations (some of which were 
 recently fixed by HADOOP-9454). This patch adds an s3a filesystem which uses 
 the aws-sdk instead of the jets3t library. There are a number of improvements 
 over s3native including:
 - Parallel copy (rename) support (dramatically speeds up commits on large 
 files)
 - AWS S3 explorer compatible empty directory files "xyz/" instead of 
 "xyz_$folder$" (reduces littering)
 - Ignores "_$folder$" files created by s3native and other S3 
 browsing utilities
 - Supports multiple output buffer dirs to even out IO when uploading files
 - Supports IAM role-based authentication
 - Allows setting a default canned ACL for uploads (public, private, etc.)
 - Better error recovery handling
 - Should handle input seeks without having to download the whole file (used 
 for splits a lot)
 This code is a copy of https://github.com/Aloisius/hadoop-s3a with patches to 
 various pom files to get it to build against trunk. I've been using 0.0.1 in 
 production with CDH 4 for several months and CDH 5 for a few days. The 
 version here is 0.0.2, which changes around some keys to hopefully bring the 
 key name style more in line with the rest of hadoop 2.x.
 *Tunable parameters:*
 fs.s3a.access.key - Your AWS access key ID (omit for role authentication)
 fs.s3a.secret.key - Your AWS secret key (omit for role authentication)
 fs.s3a.connection.maximum - Controls how many parallel connections 
 HttpClient spawns (default: 15)
 fs.s3a.connection.ssl.enabled - Enables or disables SSL connections to S3 
 (default: true)
 fs.s3a.attempts.maximum - How many times we should retry 

[jira] [Commented] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135937#comment-14135937
 ] 

Hudson commented on HADOOP-10868:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1898 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1898/])
HADOOP-10868. AuthenticationFilter should support externalizing the secret for 
signing and provide rotation support. (rkanter via tucu) (tucu: rev 
932ae036acb96634c5dd435d57ba02ce4d5e8918)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRolloverSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestJaasConfiguration.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* hadoop-common-project/hadoop-auth/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestStringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestRandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestSigner.java
* hadoop-project/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RandomSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10868. Addendum (tucu: rev 7e08c0f23f58aa143f0997f2472e8051175142e9)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java


 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868_addendum.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6590) Add a username check in hadoop-daemon.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6590:
-
Affects Version/s: 3.0.0

 Add a username check in hadoop-daemon.sh
 

 Key: HADOOP-6590
 URL: https://issues.apache.org/jira/browse/HADOOP-6590
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.22.0, 3.0.0
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Attachments: HADOOP-6590-02.patch, HADOOP-6590-2010-07-12.txt, 
 HADOOP-6590.patch


 We experienced a case that sometimes we accidentally started HDFS or 
 MAPREDUCE with root user. Then the directory permission will be modified and 
 we have to chown them. It will be nice if there can be a username checking in 
 the hadoop-daemon.sh script so that we always start with the desired username.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6590) Add a username check in hadoop-daemon.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6590:
-
Status: Patch Available  (was: Open)

 Add a username check in hadoop-daemon.sh
 

 Key: HADOOP-6590
 URL: https://issues.apache.org/jira/browse/HADOOP-6590
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.22.0
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Attachments: HADOOP-6590-02.patch, HADOOP-6590-2010-07-12.txt, 
 HADOOP-6590.patch


 We experienced a case that sometimes we accidentally started HDFS or 
 MAPREDUCE with root user. Then the directory permission will be modified and 
 we have to chown them. It will be nice if there can be a username checking in 
 the hadoop-daemon.sh script so that we always start with the desired username.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-6590) Add a username check in hadoop-daemon.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-6590:


Assignee: Allen Wittenauer  (was: Scott Chen)

 Add a username check in hadoop-daemon.sh
 

 Key: HADOOP-6590
 URL: https://issues.apache.org/jira/browse/HADOOP-6590
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.22.0, 3.0.0
Reporter: Scott Chen
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: HADOOP-6590-02.patch, HADOOP-6590-2010-07-12.txt, 
 HADOOP-6590.patch


 We experienced a case that sometimes we accidentally started HDFS or 
 MAPREDUCE with root user. Then the directory permission will be modified and 
 we have to chown them. It will be nice if there can be a username checking in 
 the hadoop-daemon.sh script so that we always start with the desired username.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-16 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu reassigned HADOOP-10714:


Assignee: Juan Yu  (was: David S. Wang)

 AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
 --

 Key: HADOOP-10714
 URL: https://issues.apache.org/jira/browse/HADOOP-10714
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.5.0
Reporter: David S. Wang
Assignee: Juan Yu
Priority: Critical
  Labels: s3
 Attachments: HADOOP-10714-1.patch


 In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
 to have the number of entries at 1000 or below. Otherwise we get a Malformed 
 XML error similar to:
 com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
 Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
 MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
 did not validate against our published schema, S3 Extended Request ID: 
 DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
 at 
 com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
 at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
 at 
 com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
 Note that this is mentioned in the AWS documentation:
 http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
 “The Multi-Object Delete request contains a list of up to 1000 keys that you 
 want to delete. In the XML, you provide the object key names, and optionally, 
 version IDs if you want to delete a specific version of the object from a 
 versioning-enabled bucket. For each key, Amazon S3….”
 Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
 problem.
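
 A minimal sketch of the batching this limit implies, using the AWS SDK v1 
 client named in the stack trace above (the helper name and inputs are 
 hypothetical, not the actual S3AFileSystem code):
 {code}
 import java.util.List;

 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.model.DeleteObjectsRequest;
 import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

 class S3BatchDeleteSketch {
   private static final int MAX_ENTRIES = 1000; // documented per-request limit

   // Deletes "keys" from "bucket" in chunks of at most 1000 entries each,
   // so no single Multi-Object Delete request exceeds the limit.
   static void deleteInBatches(AmazonS3 s3, String bucket, List<KeyVersion> keys) {
     for (int i = 0; i < keys.size(); i += MAX_ENTRIES) {
       List<KeyVersion> batch =
           keys.subList(i, Math.min(i + MAX_ENTRIES, keys.size()));
       s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(batch));
     }
   }
 }
 {code}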



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11075) hadoop-kms is not being published to maven

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HADOOP-11075.
-
Resolution: Duplicate

Integrating this into HDFS-7006, will give credit to 
[~anthony.young-gar...@cloudera.com] for it on commit.

 hadoop-kms is not being published to maven
 --

 Key: HADOOP-11075
 URL: https://issues.apache.org/jira/browse/HADOOP-11075
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Anthony Young-Garner
Assignee: Anthony Young-Garner
Priority: Minor
 Attachments: 0001-Exposed-hadoop-kms-classes-on-hadoop-KMS-POM.patch, 
 HADOOP-11075.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur moved HDFS-7074 to HADOOP-11096:
---

  Component/s: (was: security)
   security
 Target Version/s:   (was: 2.6.0)
Affects Version/s: (was: 2.6.0)
   2.6.0
  Key: HADOOP-11096  (was: HDFS-7074)
  Project: Hadoop Common  (was: Hadoop HDFS)

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that 
 the keyversionname belongs to the keyname.
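
 A minimal sketch of that check, assuming Hadoop's "keyname@version" naming 
 convention from {{KeyProvider.buildVersionName()}} (this is not the actual 
 KeyAuthorizationKeyProvider code):
 {code}
 import java.io.IOException;

 class KeyVersionCheckSketch {
   // Rejects a decrypt request whose key version does not belong to keyName.
   static void verifyKeyVersionBelongsToKey(String keyName, String keyVersionName)
       throws IOException {
     int at = keyVersionName.lastIndexOf('@');
     String baseName = (at >= 0) ? keyVersionName.substring(0, at) : keyVersionName;
     if (!baseName.equals(keyName)) {
       throw new IOException("KeyVersion '" + keyVersionName
           + "' does not belong to key '" + keyName + "'");
     }
   }
 }
 {code}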



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11096:

Target Version/s: 2.6.0
Assignee: Arun Suresh  (was: Alejandro Abdelnur)

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh

 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that 
 the keyversionname belongs to the keyname.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-11096:
---

Assignee: Alejandro Abdelnur  (was: Arun Suresh)

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that 
 the keyversionname belongs to the keyname.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11096:

Attachment: HADOOP-11096.patch

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11096.patch


 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that 
 the keyversionname belongs to the keyname.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11055) non-daemon pid files are missing

2014-09-16 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136172#comment-14136172
 ] 

Owen O'Malley commented on HADOOP-11055:


+1 looks good, Allen.

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11055.patch


 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136200#comment-14136200
 ] 

Andrew Wang commented on HADOOP-11096:
--

+1 another nice one, thanks Tucu

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11096.patch


 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that
 the keyversionname belongs to the keyname.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11055) non-daemon pid files are missing

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11055:
--
Attachment: HADOOP-11055-01.patch

-01 is just some syntax cleanup.

Thanks Owen! I'll commit this in a bit.

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11055-01.patch, HADOOP-11055.patch


 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11055) non-daemon pid files are missing

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11055:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11055-01.patch, HADOOP-11055.patch


 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11096) KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the keyname on decrypt

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11096:

Status: Patch Available  (was: Open)

 KMS: KeyAuthorizationKeyProvider should verify the keyversion belongs to the 
 keyname on decrypt
 ---

 Key: HADOOP-11096
 URL: https://issues.apache.org/jira/browse/HADOOP-11096
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11096.patch


 When decrypting an EEK, the {{KeyAuthorizationKeyProvider}} must verify that
 the keyversionname belongs to the keyname.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params

2014-09-16 Thread Charles Lamb (JIRA)
Charles Lamb created HADOOP-11097:
-

 Summary: kms docs say proxyusers, not proxyuser for config params
 Key: HADOOP-11097
 URL: https://issues.apache.org/jira/browse/HADOOP-11097
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial


The KMS docs have the proxy parameters named proxyusers, not proxyuser.
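For reference, a kms-site.xml fragment with the corrected singular prefix; the property names follow the {{hadoop.kms.proxyuser.#USER#.*}} pattern from the KMS docs, and the user name {{client}} is just a placeholder.

{code:xml}
<!-- Illustrative only: "client" is a placeholder proxy user. -->
<property>
  <name>hadoop.kms.proxyuser.client.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.client.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.client.hosts</name>
  <value>*</value>
</property>
{code}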




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params

2014-09-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11097:
--
Attachment: HADOOP-11097.001.patch

Here's a patch that corrects the parameters.


 kms docs say proxyusers, not proxyuser for config params
 

 Key: HADOOP-11097
 URL: https://issues.apache.org/jira/browse/HADOOP-11097
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11097.001.patch


 The KMS docs have the proxy parameters named proxyusers, not proxyuser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params

2014-09-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11097 started by Charles Lamb.
-
 kms docs say proxyusers, not proxyuser for config params
 

 Key: HADOOP-11097
 URL: https://issues.apache.org/jira/browse/HADOOP-11097
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11097.001.patch


 The KMS docs have the proxy parameters named proxyusers, not proxyuser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params

2014-09-16 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136347#comment-14136347
 ] 

Charles Lamb commented on HADOOP-11097:
---

No tests are required -- it's a simple doc change.


 kms docs say proxyusers, not proxyuser for config params
 

 Key: HADOOP-11097
 URL: https://issues.apache.org/jira/browse/HADOOP-11097
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11097.001.patch


 The KMS docs have the proxy parameters named proxyusers, not proxyuser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11055) non-daemon pid files are missing

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136350#comment-14136350
 ] 

Hadoop QA commented on HADOOP-11055:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669202/HADOOP-11055-01.patch
  against trunk revision 56119fe.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4739//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4739//console

This message is automatically generated.

 non-daemon pid files are missing
 

 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11055-01.patch, HADOOP-11055.patch


 Somewhere along the way, daemons run in default mode lost pid files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11022) User replaced functions get lost 2-3 levels deep (e.g., sbin)

2014-09-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136375#comment-14136375
 ] 

Allen Wittenauer commented on HADOOP-11022:
---

Self +1 on this one, after having a discussion with another committer about the 
changes (especially since I'm adding another config file).

 User replaced functions get lost 2-3 levels deep (e.g., sbin)
 -

 Key: HADOOP-11022
 URL: https://issues.apache.org/jira/browse/HADOOP-11022
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Critical
 Attachments: HADOOP-11022.patch


 The code that protects hadoop-env.sh from being re-executed is also causing 
 functions that the user replaced to get overridden with the defaults.  This 
 typically happens when running commands that nest, such as most of the 
 content in sbin.  Just running stuff out of bin (e.g., bin/hdfs --daemon 
 start namenode) does not trigger this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9384) Update S3 native fs implementation to use AWS SDK to support authorization through roles

2014-09-16 Thread David S. Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136386#comment-14136386
 ] 

David S. Wang commented on HADOOP-9384:
---

+1 to [~ste...@apache.org]'s comment

 Update S3 native fs implementation to use AWS SDK to support authorization 
 through roles
 

 Key: HADOOP-9384
 URL: https://issues.apache.org/jira/browse/HADOOP-9384
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
 Environment: Locally: RHEL 6, AWS S3
 Remotely: AWS EC2 (RHEL 6), AWS S3
Reporter: D. Granit
Priority: Minor
 Attachments: HADOOP-9384-v2.patch, HADOOP-9384.patch


 Currently the S3 native implementation 
 {{org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore}} requires 
 credentials to be set explicitly. Amazon allows setting credentials for 
 instances instead of users, via roles. These credentials are rotated 
 frequently and kept in a local cache, all of which is handled by the AWS SDK, 
 in this case by the {{AmazonS3Client}}. The SDK follows a specific order to 
 establish whether credentials are set explicitly or via a role:
 - Environment Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
 - Java System Properties: aws.accessKeyId and aws.secretKey
 - Instance Metadata Service, which provides the credentials associated with 
 the IAM role for the EC2 instance
 as seen in 
 http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html
 To support this feature the current {{NativeFileSystemStore}} implementation 
 needs to be altered to use the AWS SDK instead of the JetS3t S3 libraries.
 A request for this feature has previously been raised as part of the Flume 
 project (FLUME-1691) where the HDFS on top of S3 implementation is used as a 
 manner of logging into S3 via an HDFS Sink.
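For illustration, a minimal sketch of what moving to the SDK buys: {{DefaultAWSCredentialsProviderChain}} and {{AmazonS3Client}} are AWS SDK for Java (v1) classes that implement exactly the lookup order listed above. This is a sketch, not the patch itself.

{code:java}
// A minimal sketch, not the patch: the SDK's default chain checks
// environment variables, then Java system properties, then the EC2
// instance metadata service (IAM role), in that order.
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class S3RoleCredentialsSketch {
  public static void main(String[] args) {
    // With IAM role credentials served from instance metadata, no access
    // keys need to be configured explicitly.
    AmazonS3 s3 = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
    for (Bucket b : s3.listBuckets()) {
      System.out.println(b.getName());
    }
  }
}
{code}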



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params

2014-09-16 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136397#comment-14136397
 ] 

Alejandro Abdelnur commented on HADOOP-11097:
-

+1

 kms docs say proxyusers, not proxyuser for config params
 

 Key: HADOOP-11097
 URL: https://issues.apache.org/jira/browse/HADOOP-11097
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11097.001.patch


 The KMS docs have the proxy parameters named proxyusers, not proxyuser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11022) User replaced functions get lost 2-3 levels deep (e.g., sbin)

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11022:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 User replaced functions get lost 2-3 levels deep (e.g., sbin)
 -

 Key: HADOOP-11022
 URL: https://issues.apache.org/jira/browse/HADOOP-11022
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Critical
 Fix For: 3.0.0

 Attachments: HADOOP-11022.patch


 The code that protects hadoop-env.sh from being re-executed is also causing 
 functions that the user replaced to get overridden with the defaults.  This 
 typically happens when running commands that nest, such as most of the 
 content in sbin.  Just running stuff out of bin (e.g., bin/hdfs --daemon 
 start namenode) does not trigger this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11081) Document hadoop properties expected to be set by the shell code in *-env.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-11081:
-

Assignee: Allen Wittenauer

 Document hadoop properties expected to be set by the shell code in *-env.sh
 ---

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: newbie
 Attachments: HADOOP-11081-01.patch, HADOOP-11081.patch


 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11081) Document hadoop properties expected to be set by the shell code in *-env.sh

2014-09-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11081:
--
Attachment: HADOOP-11081-01.patch

-01:

* some text cleanup
* remove the setting for JAVA_HOME since we throw an error later and it breaks 
Mac OS X autodetect


 Document hadoop properties expected to be set by the shell code in *-env.sh
 ---

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
  Labels: newbie
 Attachments: HADOOP-11081-01.patch, HADOOP-11081.patch


 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11095) How about Null check when closing inputstream object in JavaKeyStoreProvider#() ?

2014-09-16 Thread skrho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

skrho updated HADOOP-11095:
---
Affects Version/s: 2.5.1
   Status: Patch Available  (was: Open)

 How about Null check when closing inputstream object in 
 JavaKeyStoreProvider#() ?
 -

 Key: HADOOP-11095
 URL: https://issues.apache.org/jira/browse/HADOOP-11095
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: skrho
Priority: Minor
 Attachments: HADOOP-11095_001.patch


 In the finally block:
   InputStream is = pwdFile.openStream();
   try {
     password = IOUtils.toCharArray(is);
   } finally {
     is.close();
   }
 How about a null check when closing the inputstream object?
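One possible shape for the guard: move {{openStream()}} inside the try and null-check in the finally. A sketch under the assumption that {{pwdFile}} is a {{java.net.URL}} and {{IOUtils}} is Apache commons-io, not the committed change.

{code:java}
// A sketch of the suggested null-safe close, not the committed change.
import java.io.InputStream;
import java.net.URL;
import org.apache.commons.io.IOUtils;

class PasswordReader {
  static char[] readPassword(URL pwdFile) throws Exception {
    InputStream is = null;
    try {
      is = pwdFile.openStream();      // may throw before 'is' is assigned
      return IOUtils.toCharArray(is);
    } finally {
      if (is != null) {               // guard so close() cannot NPE
        is.close();
      }
    }
  }
}
{code}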



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11016) KMS should support signing cookies with zookeeper secret manager

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-11016:
---

Assignee: Alejandro Abdelnur  (was: Arun Suresh)

 KMS should support signing cookies with zookeeper secret manager
 

 Key: HADOOP-11016
 URL: https://issues.apache.org/jira/browse/HADOOP-11016
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 This will allow supporting multiple KMS instances behind a load-balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-09-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136559#comment-14136559
 ] 

Andrew Wang commented on HADOOP-10922:
--

Sorry for the massive delay in reviewing this. LGTM +1, I'll commit this 
shortly. Thanks for sticking with this Larry!

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch, HADOOP-10922-2.patch, 
 HADOOP-10922-3.patch


 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10922) User documentation for CredentialShell

2014-09-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10922:
-
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks again Larry.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Fix For: 2.6.0

 Attachments: HADOOP-10922-1.patch, HADOOP-10922-2.patch, 
 HADOOP-10922-3.patch


 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-09-16 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136568#comment-14136568
 ] 

Larry McCay commented on HADOOP-10922:
--

Thanks, [~andrew.wang]!
I will carve out some time to get the key provider one done as well.


 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Fix For: 2.6.0

 Attachments: HADOOP-10922-1.patch, HADOOP-10922-2.patch, 
 HADOOP-10922-3.patch


 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11098) [jdk8] MaxDirectMemorySize default changed between JDK7 and 8

2014-09-16 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-11098:


 Summary: [jdk8] MaxDirectMemorySize default changed between JDK7 
and 8
 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Travis Thompson


I noticed this because the NameNode UI shows Max Non Heap Memory, which, after 
some digging, I found correlates to MaxDirectMemorySize.

JDK7
{noformat}
Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
Non Heap Memory used 57.32 MB of 67.38 MB Commited Non Heap Memory. Max Non 
Heap Memory is 130 MB. 
{noformat}

JDK8
{noformat}
Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
Non Heap Memory used 103.12 MB of 104.41 MB Commited Non Heap Memory. Max Non 
Heap Memory is -1 B. 
{noformat}

More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11098) [JDK8] MaxDirectMemorySize default changed between JDK7 and 8

2014-09-16 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HADOOP-11098:
-
Summary: [JDK8] MaxDirectMemorySize default changed between JDK7 and 8  
(was: [jdk8] MaxDirectMemorySize default changed between JDK7 and 8)

 [JDK8] MaxDirectMemorySize default changed between JDK7 and 8
 -

 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Travis Thompson

 I noticed this because the NameNode UI shows Max Non Heap Memory, which, 
 after some digging, I found correlates to MaxDirectMemorySize.
 JDK7
 {noformat}
 Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 57.32 MB of 67.38 MB Commited Non Heap Memory. Max Non 
 Heap Memory is 130 MB. 
 {noformat}
 JDK8
 {noformat}
 Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 103.12 MB of 104.41 MB Commited Non Heap Memory. Max Non 
 Heap Memory is -1 B. 
 {noformat}
 More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11098) [JDK8] MaxDirectMemorySize default changed between JDK7 and 8

2014-09-16 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136582#comment-14136582
 ] 

Travis Thompson commented on HADOOP-11098:
--

Knowing the GUI is served from JMX, I looked into JMX and found this:
{code}
{
  "beans" : [ {
    ...
    "MemNonHeapMaxM" : -9.536743E-7,
    ...
  },
{code}

which is getting set here:

{code:title=hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java}
  private void getMemoryUsage(MetricsRecordBuilder rb) {
    MemoryUsage memNonHeap = memoryMXBean.getNonHeapMemoryUsage();
    MemoryUsage memHeap = memoryMXBean.getHeapMemoryUsage();
    Runtime runtime = Runtime.getRuntime();
    rb.addGauge(MemNonHeapUsedM, memNonHeap.getUsed() / M)
      .addGauge(MemNonHeapCommittedM, memNonHeap.getCommitted() / M)
      .addGauge(MemNonHeapMaxM, memNonHeap.getMax() / M)
      .addGauge(MemHeapUsedM, memHeap.getUsed() / M)
      .addGauge(MemHeapCommittedM, memHeap.getCommitted() / M)
      .addGauge(MemHeapMaxM, memHeap.getMax() / M)
      .addGauge(MemMaxM, runtime.maxMemory() / M);
  }
{code}

According to [this | 
http://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryUsage.html#getMax--],
 getMax() returns -1 if the max is undefined.
{quote}
public long getMax()
Returns the maximum amount of memory in bytes that can be used for memory 
management. This method returns -1 if the maximum memory size is undefined.
{quote}
And according to [this | 
http://docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm#BABGCFFB], you 
set the non-heap max with {{-XX:MaxDirectMemorySize}}, but it claims it's 
unlimited by default.
{quote}
Default Value
The default value is zero, which means the maximum direct memory is unbounded.
{quote}
This is obviously the case in JDK8, but I haven't been able to find any 
references to the default changing between JDK7 and JDK8 yet.
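A small standalone check of the behavior described above; everything here is standard {{java.lang.management}} API. On JDK8 the printed max is -1, which is the sentinel any metrics code would need to guard before dividing by M.

{code:java}
// Prints the non-heap max as the MemoryMXBean reports it; -1 means
// "undefined", so scale to MB only when the value is a real limit.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class NonHeapMax {
  public static void main(String[] args) {
    MemoryUsage nonHeap =
        ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
    long max = nonHeap.getMax();
    System.out.println(max == -1
        ? "non-heap max: undefined (-1)"
        : "non-heap max: " + (max / (1024 * 1024)) + " MB");
  }
}
{code}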

 [JDK8] MaxDirectMemorySize default changed between JDK7 and 8
 -

 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Travis Thompson

 I noticed this because the NameNode UI shows Max Non Heap Memory, which, 
 after some digging, I found correlates to MaxDirectMemorySize.
 JDK7
 {noformat}
 Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 57.32 MB of 67.38 MB Commited Non Heap Memory. Max Non 
 Heap Memory is 130 MB. 
 {noformat}
 JDK8
 {noformat}
 Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 103.12 MB of 104.41 MB Commited Non Heap Memory. Max Non 
 Heap Memory is -1 B. 
 {noformat}
 More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-16 Thread Swapnil Daingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Daingade updated HADOOP-11044:
--
Attachment: 11044.patch6

 FileSystem counters can overflow for large number of readOps, largeReadOps, 
 writeOps
 

 Key: HADOOP-11044
 URL: https://issues.apache.org/jira/browse/HADOOP-11044
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.0, 2.4.1
Reporter: Swapnil Daingade
Priority: Minor
 Attachments: 11044.patch4, 11044.patch6


 The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
 readOps, largeReadOps, and writeOps as int. Also, the 
 org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
 getReadOps(), getLargeReadOps(), and getWriteOps() that return int. These int 
 values can overflow if they exceed 2^31-1, showing negative values. It would 
 be nice if these could be changed to long.
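For illustration, the wraparound being described (a self-contained sketch, not Hadoop code):

{code:java}
// An int op counter wraps negative one increment past 2^31 - 1;
// the same count held in a long does not.
public class CounterOverflow {
  public static void main(String[] args) {
    int intOps = Integer.MAX_VALUE;    // 2147483647
    long longOps = Integer.MAX_VALUE;
    intOps++;                          // wraps to -2147483648
    longOps++;                         // counts on to 2147483648
    System.out.println("int counter:  " + intOps);
    System.out.println("long counter: " + longOps);
  }
}
{code}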



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-16 Thread Swapnil Daingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Daingade updated HADOOP-11044:
--
Attachment: (was: 11044.patch5)

 FileSystem counters can overflow for large number of readOps, largeReadOps, 
 writeOps
 

 Key: HADOOP-11044
 URL: https://issues.apache.org/jira/browse/HADOOP-11044
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.0, 2.4.1
Reporter: Swapnil Daingade
Priority: Minor
 Attachments: 11044.patch4, 11044.patch6


 The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
 readOps, largeReadOps, and writeOps as int. Also, the 
 org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
 getReadOps(), getLargeReadOps(), and getWriteOps() that return int. These int 
 values can overflow if they exceed 2^31-1, showing negative values. It would 
 be nice if these could be changed to long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11081) Document hadoop properties expected to be set by the shell code in *-env.sh

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136700#comment-14136700
 ] 

Hadoop QA commented on HADOOP-11081:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669267/HADOOP-11081-01.patch
  against trunk revision 33ce887.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4741//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4741//console

This message is automatically generated.

 Document hadoop properties expected to be set by the shell code in *-env.sh
 ---

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: newbie
 Attachments: HADOOP-11081-01.patch, HADOOP-11081.patch


 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11098) [JDK8] MaxDirectMemorySize default changed between JDK7 and 8

2014-09-16 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HADOOP-11098:
-
Affects Version/s: 2.3.0

 [JDK8] MaxDirectMemorySize default changed between JDK7 and 8
 -

 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Travis Thompson

 I noticed this because the NameNode UI shows Max Non Heap Memory, which, 
 after some digging, I found correlates to MaxDirectMemorySize.
 JDK7
 {noformat}
 Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 57.32 MB of 67.38 MB Commited Non Heap Memory. Max Non 
 Heap Memory is 130 MB. 
 {noformat}
 JDK8
 {noformat}
 Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 103.12 MB of 104.41 MB Commited Non Heap Memory. Max Non 
 Heap Memory is -1 B. 
 {noformat}
 More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10946) Fix a bunch of typos in log messages

2014-09-16 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136736#comment-14136736
 ] 

Ray Chiang commented on HADOOP-10946:
-

It looks like the other three have been committed.  Is there anyone else who 
wants to comment or review this?

 Fix a bunch of typos in log messages
 

 Key: HADOOP-10946
 URL: https://issues.apache.org/jira/browse/HADOOP-10946
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.1
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10946-04.patch, HADOOP-10946-05.patch, 
 HADOOP-10946-06.patch, HADOOP10946-01.patch, HADOOP10946-02.patch, 
 HADOOP10946-03.patch


 There are a bunch of typos in various log messages.  These need cleaning up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11099) KMS return HTTP UNAUTHORIZED 401 on ACL failure

2014-09-16 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11099:
---

 Summary: KMS return HTTP UNAUTHORIZED 401 on ACL failure
 Key: HADOOP-11099
 URL: https://issues.apache.org/jira/browse/HADOOP-11099
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


The usual error, HTTP UNAUTHORIZED, is for authentication, not for 
authorization.

KMS should return HTTP FORBIDDEN in case of ACL failure.
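For illustration, the distinction in servlet terms (a sketch; the {{authenticated}}/{{allowed}} flags stand in for the real authentication filter and ACL check):

{code:java}
// 401 answers "who are you?"; 403 answers "I know you, and you may not".
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

class AccessCheck {
  static void respond(HttpServletResponse resp, boolean authenticated,
      boolean allowed) throws IOException {
    if (!authenticated) {
      resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);  // 401
    } else if (!allowed) {
      resp.sendError(HttpServletResponse.SC_FORBIDDEN);     // 403
    }
  }
}
{code}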



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11099) KMS return HTTP UNAUTHORIZED 401 on ACL failure

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11099:

Attachment: HADOOP-11099.patch

 KMS return HTTP UNAUTHORIZED 401 on ACL failure
 ---

 Key: HADOOP-11099
 URL: https://issues.apache.org/jira/browse/HADOOP-11099
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11099.patch


 The usual error, HTTP UNAUTHORIZED, is for authentication, not for 
 authorization.
 KMS should return HTTP FORBIDDEN in case of ACL failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11099) KMS return HTTP UNAUTHORIZED 401 on ACL failure

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11099:

Status: Patch Available  (was: Open)

No new testcase; the existing testcases exercise the ACL failure. And 
HADOOP-11016 (blocked by this JIRA) explicitly asserts FORBIDDEN on ACL 
failure.

 KMS return HTTP UNAUTHORIZED 401 on ACL failure
 ---

 Key: HADOOP-11099
 URL: https://issues.apache.org/jira/browse/HADOOP-11099
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11099.patch


 The usual error, HTTP UNAUTHORIZED, is for authentication, not for 
 authorization.
 KMS should return HTTP FORBIDDEN in case of ACL failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11016) KMS should support signing cookies with zookeeper secret manager

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11016:

Attachment: HADOOP-11016.patch

The patch is exclusively documentation and a testcase verifying the 
configuration for ZooKeeper; the change in KMSConfiguration is a rename of a 
constant that was shadowing a parent class constant (CONFIG_PREFIX).
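For context, the configuration in question looks roughly like the fragment below. The property names follow the signer-secret-provider pattern the docs describe and should be checked against the committed documentation; the connection string and path are placeholders.

{code:xml}
<!-- Illustrative kms-site.xml fragment; values are placeholders. -->
<property>
  <name>hadoop.kms.authentication.signer.secret.provider</name>
  <value>zookeeper</value>
</property>
<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
  <value>/hadoop-kms/signer-secret</value>
</property>
{code}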

 KMS should support signing cookies with zookeeper secret manager
 

 Key: HADOOP-11016
 URL: https://issues.apache.org/jira/browse/HADOOP-11016
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11016.patch


 This will allow supporting multiple KMS instances behind a load-balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-09-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136773#comment-14136773
 ] 

Hadoop QA commented on HADOOP-11044:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669296/11044.patch6
  against trunk revision 0e7d1db.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4742//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4742//console

This message is automatically generated.

 FileSystem counters can overflow for large number of readOps, largeReadOps, 
 writeOps
 

 Key: HADOOP-11044
 URL: https://issues.apache.org/jira/browse/HADOOP-11044
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.0, 2.4.1
Reporter: Swapnil Daingade
Priority: Minor
 Attachments: 11044.patch4, 11044.patch6


 The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
 readOps, largeReadOps, and writeOps as int. Also, the 
 org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
 getReadOps(), getLargeReadOps(), and getWriteOps() that return int. These int 
 values can overflow if they exceed 2^31-1, showing negative values. It would 
 be nice if these could be changed to long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-16 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136772#comment-14136772
 ] 

Alejandro Abdelnur commented on HADOOP-11062:
-

+1

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch, 
 HADOOP-11062.2.patch, HADOOP-11062.3.patch, HADOOP-11062.4.patch, 
 HADOOP-11062.5.patch


 There are a few CryptoCodec-related testcases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.
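One way to express the skip: a JUnit assumption keyed off Hadoop's native-code loader (a sketch; the exact guard used by the patch may differ).

{code:java}
// assumeTrue() marks the test as skipped, not failed, when the
// native library was not built (i.e., -Pnative was not used).
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;
import org.junit.Test;

public class TestRequiresNative {
  @Before
  public void requireNative() {
    assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
  }

  @Test
  public void testUsesOpenssl() {
    // crypto-codec assertions would go here
  }
}
{code}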



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-16 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14136775#comment-14136775
 ] 

Alejandro Abdelnur commented on HADOOP-11062:
-

[~asuresh], would you please rebase the patch? it fails to apply.

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch, 
 HADOOP-11062.2.patch, HADOOP-11062.3.patch, HADOOP-11062.4.patch, 
 HADOOP-11062.5.patch


 There are a few CryptoCodec-related testcases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11062:
-
Attachment: HADOOP-11062.6.patch

Rebasing patch..

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch, 
 HADOOP-11062.2.patch, HADOOP-11062.3.patch, HADOOP-11062.4.patch, 
 HADOOP-11062.5.patch, HADOOP-11062.6.patch


 There are a few CryptoCodec-related testcases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11018) KMS should load multiple kerberos principals

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HADOOP-11018.
-
Resolution: Duplicate

 KMS should load multiple kerberos principals
 

 Key: HADOOP-11018
 URL: https://issues.apache.org/jira/browse/HADOOP-11018
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 This would allow multiple KMS instances to serve behind a VIP and provide 
 direct access to a particular instance as well (i.e., for monitoring 
 purposes).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10982) KMS: Support for multiple Kerberos principals

2014-09-16 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10982:

Summary: KMS: Support for multiple Kerberos principals  (was: Multiple 
Kerberos principals for KMS)

 KMS: Support for multiple Kerberos principals
 -

 Key: HADOOP-10982
 URL: https://issues.apache.org/jira/browse/HADOOP-10982
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Alejandro Abdelnur

 The Key Management Server should support multiple Kerberos principals.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)