[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144488#comment-14144488
 ] 

Hadoop QA commented on HADOOP-11017:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12670623/HADOOP-11017.11.patch
  against trunk revision a9a55db.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4790//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4790//console

This message is automatically generated.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.2.patch, HADOOP-11017.3.patch, 
> HADOOP-11017.4.patch, HADOOP-11017.5.patch, HADOOP-11017.6.patch, 
> HADOOP-11017.7.patch, HADOOP-11017.8.patch, HADOOP-11017.9.patch, 
> HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11101) How about inputstream close statement from catch block to finally block in FileContext#copy() ?

2014-09-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144552#comment-14144552
 ] 

Vinayakumar B commented on HADOOP-11101:


I think you can remove the catch block itself completely. There is no need to 
catch the exception just to rethrow it as-is.

> How about inputstream close statement from catch block to finally block in 
> FileContext#copy() ?
> ---
>
> Key: HADOOP-11101
> URL: https://issues.apache.org/jira/browse/HADOOP-11101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: skrho
>Priority: Minor
> Attachments: HADOOP-11101_001.patch
>
>
> If an IOException happens, it is caught by the catch block. But if another 
> exception happens, it is not caught, and the stream objects are never 
> closed:
> {code}
> try {
>   in = open(qSrc);
>   EnumSet<CreateFlag> createFlag = overwrite ? EnumSet.of(
>       CreateFlag.CREATE, CreateFlag.OVERWRITE) :
>       EnumSet.of(CreateFlag.CREATE);
>   out = create(qDst, createFlag);
>   IOUtils.copyBytes(in, out, conf, true);
> } catch (IOException e) {
>   IOUtils.closeStream(out);
>   IOUtils.closeStream(in);
>   throw e;
> }
> {code}
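
For illustration, a sketch of the close-in-finally variant the title proposes 
(a hypothetical rewrite, not the committed patch; {{qSrc}}, {{qDst}}, 
{{overwrite}} and {{conf}} are as in the snippet above):

{code}
// The finally block runs for every exception type, not just IOException,
// so the streams are closed on any failure path.
FSDataInputStream in = null;
FSDataOutputStream out = null;
try {
  in = open(qSrc);
  EnumSet<CreateFlag> createFlag = overwrite
      ? EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE)
      : EnumSet.of(CreateFlag.CREATE);
  out = create(qDst, createFlag);
  IOUtils.copyBytes(in, out, conf, true);
} finally {
  // IOUtils.closeStream() is null-safe, so this is harmless when
  // copyBytes() has already closed both streams on the success path.
  IOUtils.closeStream(out);
  IOUtils.closeStream(in);
}
{code}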



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144596#comment-14144596
 ] 

Steve Loughran commented on HADOOP-11117:
-

I agree ... but this is something minimal which can go in with little/no work 
—though it looks like a couple of tests are not picking up the changed text.

What we could do long term is replicate some of the stuff we did for the 
networking errors: point to wiki pages. But how is anyone going to make sense 
of a {{NoMatchException}}? I don't want to write the wiki page for that.

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11117-001.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144620#comment-14144620
 ] 

Steve Loughran commented on HADOOP-11117:
-

Some of the test failures are spurious; the only regression appears to be 
{{TestUserGroupInformation}}.

These tests are failing because the test code is doing an {{assertEquals}} on 
the expected string; the propagation of the underlying exception message is 
breaking this comparison.

The fix is what could have been done in the first place: use {{String.contains()}} 
as the probe.

{code}
testConstructorWithKerberos(org.apache.hadoop.security.TestUserGroupInformation)
  Time elapsed: 0.045 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<...me user4@OTHER.REALM[]> but 
was:<...me user4@OTHER.REALM[: 
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No 
rules applied to user4@OTHER.REALM]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.security.TestUserGroupInformation.testConstructorFailures(TestUserGroupInformation.java:343)
at 
org.apache.hadoop.security.TestUserGroupInformation.testConstructorWithKerberos(TestUserGroupInformation.java:301)

testConstructorWithRules(org.apache.hadoop.security.TestUserGroupInformation)  
Time elapsed: 0.028 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<... user2@DEFAULT.REALM[]> but was:<... 
user2@DEFAULT.REALM[: 
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No 
rules applied to user2@DEFAULT.REALM]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.security.TestUserGroupInformation.testConstructorFailures(TestUserGroupInformation.java:343)
at 
org.apache.hadoop.security.TestUserGroupInformation.testConstructorWithRules(TestUserGroupInformation.java:283)
{code}
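
A sketch of the {{String.contains()}} probe; the helper name is illustrative, 
not the committed test change:

{code}
import static org.junit.Assert.assertTrue;

// Probe for a fragment of the message instead of asserting exact
// equality, so extra detail appended by a wrapped exception no longer
// breaks the comparison.
private static void assertMessageContains(String expected, Throwable t) {
  String msg = t.getMessage();
  assertTrue("Did not find \"" + expected + "\" in \"" + msg + "\"",
      msg != null && msg.contains(expected));
}
{code}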

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11117-001.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11111) MiniKDC to use locale EN_US for case conversions

2014-09-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11111:

   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

committed to branch-2

> MiniKDC to use locale EN_US for case conversions
> 
>
> Key: HADOOP-11111
> URL: https://issues.apache.org/jira/browse/HADOOP-11111
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11111-001.patch
>
>
> The miniKDC cluster uses {{.equalsIgnoreCase()}}, {{.toLower()}}, and 
> {{.toUpper}} everywhere. While nobody uses this in production, it should be 
> fixed to ensure tests run consistently across all locales.
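
A minimal illustration of the pitfall (hedged; the demo class is hypothetical, 
and the first line of output depends on the JVM's default locale):

{code}
import java.util.Locale;

public class CaseConversionDemo {
  public static void main(String[] args) {
    String principal = "MINIKDC";
    // Default-locale conversion: under a Turkish locale, 'I' lowercases
    // to the dotless 'ı', so this prints "mınıkdc", not "minikdc".
    System.out.println(principal.toLowerCase());
    // Pinning the locale gives the same answer on every machine.
    System.out.println(principal.toLowerCase(Locale.US));
  }
}
{code}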



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11117:

Attachment: HADOOP-11117-002.patch

Patch -002, which updates the test to make it less brittle to exception strings.

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11117:

Status: Open  (was: Patch Available)

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11117:

   Fix Version/s: (was: 2.6.0)
Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11118) build hadoop by Java 7 in default

2014-09-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11118.
-
Resolution: Duplicate

> build hadoop by Java 7 in default
> -
>
> Key: HADOOP-11118
> URL: https://issues.apache.org/jira/browse/HADOOP-11118
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Guo Ruijing
>
> currently, hadoop is built with java 6 by default; the build sets the 
> compiler source and target to 1.6.
> we may change java 7 as the default for the hadoop build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11118) build hadoop by Java 7 in default

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144631#comment-14144631
 ] 

Steve Loughran commented on HADOOP-11118:
-

Duplicate of HADOOP-10530; please look at the common-dev archives for the 
history of this.

> build hadoop by Java 7 in default
> -
>
> Key: HADOOP-11118
> URL: https://issues.apache.org/jira/browse/HADOOP-11118
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Guo Ruijing
>
> currently, hadoop is built with java 6 by default; the build sets the 
> compiler source and target to 1.6.
> we may change java 7 as the default for the hadoop build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144638#comment-14144638
 ] 

Steve Loughran commented on HADOOP-10714:
-

You are right, HDFS doesn't quite follow POSIX. There's actually a difference 
between the command line {{mv}} operation and the internal rename API; I think 
the behaviour of bits of HDFS matches the CLI.

HDFS cannot/will not change its behavior; the file system specification says 
"HDFS is the definition of the FS API". And as I also said, "nobody really 
understands rename"... even the POSIX API isn't that great.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.
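
Not the committed patch: a sketch of the client-side batching the 1000-key 
limit calls for, using the AWS SDK v1 types named in the stack trace above 
(class and method names are illustrative):

{code}
import java.util.List;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

public class BatchedDelete {
  // Documented Multi-Object Delete limit.
  private static final int MAX_ENTRIES_PER_REQUEST = 1000;

  static void deleteAll(AmazonS3 s3, String bucket, List<KeyVersion> keys) {
    for (int i = 0; i < keys.size(); i += MAX_ENTRIES_PER_REQUEST) {
      List<KeyVersion> chunk =
          keys.subList(i, Math.min(i + MAX_ENTRIES_PER_REQUEST, keys.size()));
      // Each request now stays at or below 1000 keys.
      s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(chunk));
    }
  }
}
{code}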



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144643#comment-14144643
 ] 

Steve Loughran commented on HADOOP-10714:
-

Patch-wise, it looks good; abstracting out the scale tests is something that 
can be done later. At least now it is configurable for S3a.

The test is failing on jenkins because you've added a test/resources/core-site.xml, 
which triggers the test run. Rather than do something complicated there, why not:

# change the -aws POM to trigger the tests off 
{{resources/contract-test-options.xml}} ... which is of course the file needed 
for all the contract tests.
# add to the {{core-site.xml}} the config options for the scale tests, along with 
the default values. This makes it easier to see what options to change.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144658#comment-14144658
 ] 

Hadoop QA commented on HADOOP-11117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12670680/HADOOP-11117-002.patch
  against trunk revision 7aa667e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.random.TestOsSecureRandom

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4791//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4791//console

This message is automatically generated.

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11111) MiniKDC to use locale EN_US for case conversions

2014-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144666#comment-14144666
 ] 

Hudson commented on HADOOP-11111:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #689 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/689/])
HADOOP-11111 MiniKDC to use locale EN_US for case conversions (stevel: rev 
df52fec21dfc18c354f8b0c1ef187d7e272ad334)
* 
hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
HADOOP-11111 MiniKDC to use locale EN_US for case conversions: 
hadoop-common/CHANGES.TXT (stevel: rev 7aa667eefa255002cf7853ba51affbbd4a490c02)
* hadoop-common-project/hadoop-common/CHANGES.txt


> MiniKDC to use locale EN_US for case conversions
> 
>
> Key: HADOOP-11111
> URL: https://issues.apache.org/jira/browse/HADOOP-11111
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11111-001.patch
>
>
> The miniKDC cluster uses {{.equalsIgnoreCase()}}, {{.toLower()}}, and 
> {{.toUpper}} everywhere. While nobody uses this in production, it should be 
> fixed to ensure tests run consistently across all locales.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11111) MiniKDC to use locale EN_US for case conversions

2014-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144798#comment-14144798
 ] 

Hudson commented on HADOOP-11111:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1905 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1905/])
HADOOP-11111 MiniKDC to use locale EN_US for case conversions (stevel: rev 
df52fec21dfc18c354f8b0c1ef187d7e272ad334)
* 
hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
HADOOP-11111 MiniKDC to use locale EN_US for case conversions: 
hadoop-common/CHANGES.TXT (stevel: rev 7aa667eefa255002cf7853ba51affbbd4a490c02)
* hadoop-common-project/hadoop-common/CHANGES.txt


> MiniKDC to use locale EN_US for case conversions
> 
>
> Key: HADOOP-11111
> URL: https://issues.apache.org/jira/browse/HADOOP-11111
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11111-001.patch
>
>
> The miniKDC cluster uses {{.equalsIgnoreCase()}}, {{.toLower()}}, and 
> {{.toUpper}} everywhere. While nobody uses this in production, it should be 
> fixed to ensure tests run consistently across all locales.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Juan Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144829#comment-14144829
 ] 

Juan Yu commented on HADOOP-10714:
--

The failed test is the new rename test {{TestLocalFSContractRename}} I added to 
the abstract contract. It fails because {{RawLocalFileSystem}} enforces POSIX 
rename behavior, which is different from HDFS.

{code}
public boolean rename(Path src, Path dst) throws IOException {
  // Attempt rename using Java API.
  File srcFile = pathToFile(src);
  File dstFile = pathToFile(dst);
  if (srcFile.renameTo(dstFile)) {
    return true;
  }

  // Enforce POSIX rename behavior that a source directory replaces an existing
  // destination if the destination is an empty directory.  On most platforms,
  // this is already handled by the Java API call above.  Some platforms
  // (notably Windows) do not provide this behavior, so the Java API call above
  // fails.  Delete destination and attempt rename again.
{code}



> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11111) MiniKDC to use locale EN_US for case conversions

2014-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144821#comment-14144821
 ] 

Hudson commented on HADOOP-11111:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1880/])
HADOOP-11111 MiniKDC to use locale EN_US for case conversions (stevel: rev 
df52fec21dfc18c354f8b0c1ef187d7e272ad334)
* 
hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
HADOOP-11111 MiniKDC to use locale EN_US for case conversions: 
hadoop-common/CHANGES.TXT (stevel: rev 7aa667eefa255002cf7853ba51affbbd4a490c02)
* hadoop-common-project/hadoop-common/CHANGES.txt


> MiniKDC to use locale EN_US for case conversions
> 
>
> Key: HADOOP-11111
> URL: https://issues.apache.org/jira/browse/HADOOP-11111
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11111-001.patch
>
>
> The miniKDC cluster uses {{.equalsIgnoreCase()}}, {{.toLower()}}, and 
> {{.toUpper}} everywhere. While nobody uses this in production, it should be 
> fixed to ensure tests run consistently across all locales.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144949#comment-14144949
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

I am comfortable with getting this addendum in; it just undoes a couple of 
changes from the original patch. However, I would like for someone more 
familiar with this code to take a look. [~vinodkv], [~tucu00] - can either of you 
review this? 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.2.patch, HADOOP-11017.3.patch, 
> HADOOP-11017.4.patch, HADOOP-11017.5.patch, HADOOP-11017.6.patch, 
> HADOOP-11017.7.patch, HADOOP-11017.8.patch, HADOOP-11017.9.patch, 
> HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10421) Enable Kerberos profiled UTs to run with IBM JAVA

2014-09-23 Thread Jinghui Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinghui Wang reassigned HADOOP-10421:
-

Assignee: Jinghui Wang

> Enable Kerberos profiled UTs to run with IBM JAVA
> -
>
> Key: HADOOP-10421
> URL: https://issues.apache.org/jira/browse/HADOOP-10421
> Project: Hadoop Common
>  Issue Type: Test
>  Components: security, test
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10421.patch
>
>
> KerberosTestUtils in hadoop-auth does not support IBM JAVA, which has 
> different Krb5LoginModule configuration options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-10714:
-
Attachment: HADOOP-10714.004.patch

Since we cannot change HDFS's rename behavior, I guess the same applies to 
{{RawLocalFileSystem}}.
I modified the new rename test to accept both the POSIX rename behavior and the 
CLI rename behavior.


> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10987) Provide an iterator-based listing API for FileSystem

2014-09-23 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10987:

Attachment: HADOOP-10987.v2.patch

> Provide an iterator-based listing API for FileSystem
> 
>
> Key: HADOOP-10987
> URL: https://issues.apache.org/jira/browse/HADOOP-10987
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HADOOP-10987.patch, HADOOP-10987.v2.patch
>
>
> Iterator based listing methods already exist in {{FileContext}} for both 
> simple listing and listing with locations. However, {{FileSystem}} lacks the 
> former.  From what I understand, it wasn't added to {{FileSystem}} because it 
> was believed to be phased out soon. Since {{FileSystem}} is very much alive 
> today and new features are getting added frequently, I propose adding an 
> iterator based {{listStatus}} method. As for the name of the new method, we 
> can use the same name used in {{FileContext}} : {{listStatusIterator()}}.
> It will be particularly useful when listing giant directories. Without this, 
> the client has to build up a huge data structure and hold it in memory. We've 
> seen client JVMs running out of memory because of this.
> Once this change is made, we can modify FsShell, etc. in followup jiras.
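
A usage sketch of the proposed method, mirroring the {{FileContext}} API 
(hedged; {{listStatusIterator()}} on {{FileSystem}} is the proposal, not yet 
committed):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/big/dir"));
while (it.hasNext()) {
  // Entries arrive incrementally; no giant in-memory array.
  FileStatus status = it.next();
  System.out.println(status.getPath());
}
{code}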



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-09-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145070#comment-14145070
 ] 

Allen Wittenauer commented on HADOOP-11117:
---

+1 lgtm.  

> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something is failing with kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11092) hadoop shell commands should print usage if not given a class

2014-09-23 Thread Jennifer Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145078#comment-14145078
 ] 

Jennifer Davis commented on HADOOP-11092:
-

+1 

> hadoop shell commands should print usage if not given a class
> -
>
> Key: HADOOP-11092
> URL: https://issues.apache.org/jira/browse/HADOOP-11092
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Bruno Mahé
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11092.patch, HDFS-2565.patch, HDFS-2565.patch
>
>
> [root@bigtop-fedora-15 ~]# hdfs foobar
> Exception in thread "main" java.lang.NoClassDefFoundError: foobar
> Caused by: java.lang.ClassNotFoundException: foobar
> at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
> Could not find the main class: foobar. Program will exit.
> Instead of loading any class, it would be nice to explain the command is not 
> valid and to call print_usage()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145107#comment-14145107
 ] 

Jian He commented on HADOOP-11017:
--

Now, {{storeNewMasterKey}} is invoked inside the synchronized block; if ZK is 
unavailable, the whole class will be blocked.
{code}
synchronized (this) {
  currentId = newKey.getKeyId();
  currentKey = newKey;
  storeDelegationKey(currentKey);
}
{code}
Irrespective of this, I think YARN also has a bug: the RM should do {{updateMasterKey}} 
instead of {{storeNewMasterKey}} while it's rolling the key.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.2.patch, HADOOP-11017.3.patch, 
> HADOOP-11017.4.patch, HADOOP-11017.5.patch, HADOOP-11017.6.patch, 
> HADOOP-11017.7.patch, HADOOP-11017.8.patch, HADOOP-11017.9.patch, 
> HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145119#comment-14145119
 ] 

Hadoop QA commented on HADOOP-10714:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12670744/HADOOP-10714.004.patch
  against trunk revision a1fd804.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4793//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4793//console

This message is automatically generated.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145151#comment-14145151
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11017:
--

bq. However, I would like for someone more familiar with this code to take a 
look. 
[~jianhe] wrote most of this code w.r.t. key/token persistence (which I reviewed 
earlier); he should be able to review/commit the addendum.

[~jianhe], can you please file the YARN ticket and link it here? Tx.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.2.patch, HADOOP-11017.3.patch, 
> HADOOP-11017.4.patch, HADOOP-11017.5.patch, HADOOP-11017.6.patch, 
> HADOOP-11017.7.patch, HADOOP-11017.8.patch, HADOOP-11017.9.patch, 
> HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145190#comment-14145190
 ] 

Jian He commented on HADOOP-11017:
--

Opened YARN-2589 for the YARN-side fix.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.2.patch, HADOOP-11017.3.patch, 
> HADOOP-11017.4.patch, HADOOP-11017.5.patch, HADOOP-11017.6.patch, 
> HADOOP-11017.7.patch, HADOOP-11017.8.patch, HADOOP-11017.9.patch, 
> HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11092) hadoop shell commands should print usage if not given a class

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145196#comment-14145196
 ] 

Steve Loughran commented on HADOOP-11092:
-

+1

> hadoop shell commands should print usage if not given a class
> -
>
> Key: HADOOP-11092
> URL: https://issues.apache.org/jira/browse/HADOOP-11092
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Bruno Mahé
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11092.patch, HDFS-2565.patch, HDFS-2565.patch
>
>
> [root@bigtop-fedora-15 ~]# hdfs foobar
> Exception in thread "main" java.lang.NoClassDefFoundError: foobar
> Caused by: java.lang.ClassNotFoundException: foobar
> at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
> Could not find the main class: foobar. Program will exit.
> Instead of loading any class, it would be nice to explain the command is not 
> valid and to call print_usage()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11092) hadoop shell commands should print usage if not given a class

2014-09-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11092:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks!

Committed to trunk.

> hadoop shell commands should print usage if not given a class
> -
>
> Key: HADOOP-11092
> URL: https://issues.apache.org/jira/browse/HADOOP-11092
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Bruno Mahé
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11092.patch, HDFS-2565.patch, HDFS-2565.patch
>
>
> [root@bigtop-fedora-15 ~]# hdfs foobar
> Exception in thread "main" java.lang.NoClassDefFoundError: foobar
> Caused by: java.lang.ClassNotFoundException: foobar
> at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
> Could not find the main class: foobar. Program will exit.
> Instead of loading any class, it would be nice to explain the command is not 
> valid and to call print_usage()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11017:
-
Attachment: HADOOP-11017.12.patch

Updating patch.

Summary of changes:
* The {{addPersistedDelegationToken()}} and {{addKey()}} methods are actually 
called as part of the {{recover()}} method, which is invoked after state is 
recovered from the external store (ZK) and before the DelegationTokenSecretManager 
is 'activated' ({{startThreads()}} is called).
* Considering that the data is already in the external store, the patch just 
modifies the local {{allKeys}} and {{currentTokens}} maps (see the sketch after 
this list).
* Patch 11 fixed only the {{addKey()}} method; patch 12 takes care of 
{{addPersistedDelegationToken()}} as well.
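
A sketch of the recovery-path behavior described above, assuming the 
{{running}} flag and {{allKeys}} map of {{AbstractDelegationTokenSecretManager}}; 
an illustration, not the patch itself:

{code}
// During recover() the key is already persisted in the external store
// (ZK), so only the local cache is updated; nothing is written back.
public synchronized void addKey(DelegationKey key) throws IOException {
  if (running) {
    throw new IOException("Can't add delegation key to a running SecretManager.");
  }
  allKeys.put(key.getKeyId(), key);  // local map only; other bookkeeping elided
}
{code}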

[~kasha], I understand your concern about the synchronized block, but if ZK is 
unavailable, then in the Active-Active case (which the ZKDTSM is trying to 
address) my opinion is that this should block the DTSM, since the update has 
to be persisted before proceeding; otherwise verification of a DelegationToken 
on a peer node might fail. (Also, prior to the patch, the {{createPassword()}} 
method (which is synchronized) used to call {{storeNewToken()}}, in which the 
RM state store made a call to ZK.)

Ran the following tests in hadoop-yarn to make sure things aren't broken:

{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.158 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
Running 
org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.361 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens
Running 
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.719 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
Running org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 189.65 sec - 
in org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
Running 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.757 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
Running 
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

Results :

Tests run: 37, Failures: 0, Errors: 0, Skipped: 0
{noformat}

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11119) TrashPolicyDefault init pushes messages to command line

2014-09-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11119:
-

 Summary: TrashPolicyDefault init pushes messages to command line
 Key: HADOOP-11119
 URL: https://issues.apache.org/jira/browse/HADOOP-11119
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Minor


During a fresh install of trunk:

{code}
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -put /etc/hosts /tmp
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -rm /tmp/hosts
14/09/23 13:05:46 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /tmp/hosts
{code}

The info message for the Namenode trash configuration isn't very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10987) Provide an iterator-based listing API for FileSystem

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145353#comment-14145353
 ] 

Hadoop QA commented on HADOOP-10987:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12670745/HADOOP-10987.v2.patch
  against trunk revision a1fd804.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1265 javac 
compiler warnings (more than the trunk's current 1263 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.crypto.random.TestOsSecureRandom
  org.apache.hadoop.ha.TestZKFailoverControllerStress
  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4792//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4792//artifact/PreCommit-HADOOP-Build-patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4792//console

This message is automatically generated.

> Provide an iterator-based listing API for FileSystem
> 
>
> Key: HADOOP-10987
> URL: https://issues.apache.org/jira/browse/HADOOP-10987
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HADOOP-10987.patch, HADOOP-10987.v2.patch
>
>
> Iterator based listing methods already exist in {{FileContext}} for both 
> simple listing and listing with locations. However, {{FileSystem}} lacks the 
> former.  From what I understand, it wasn't added to {{FileSystem}} because it 
> was believed to be phased out soon. Since {{FileSystem}} is very well alive 
> today and new features are getting added frequently, I propose adding an 
> iterator based {{listStatus}} method. As for the name of the new method, we 
> can use the same name used in {{FileContext}} : {{listStatusIterator()}}.
> It will be particularly useful when listing giant directories. Without this, 
> the client has to build up a huge data structure and hold it in memory. We've 
> seen client JVMs running out of memory because of this.
> Once this change is made, we can modify FsShell, etc. in followup jiras.
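
For context, a usage sketch of the existing {{FileContext}} iterator that the 
proposed {{FileSystem}} {{listStatusIterator()}} would mirror (the path here is 
made up):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListingSketch {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    // Entries arrive incrementally instead of as one giant in-memory array.
    RemoteIterator<FileStatus> it = fc.listStatus(new Path("/user"));
    while (it.hasNext()) {
      System.out.println(it.next().getPath());
    }
  }
}
{code}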



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11119) TrashPolicyDefault init pushes messages to command line

2014-09-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145379#comment-14145379
 ] 

Allen Wittenauer commented on HADOOP-11119:
---

Better yet:

{code}
bin/hadoop fs -rm -r /a?
14/09/23 13:28:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /a1
14/09/23 13:28:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /a2
14/09/23 13:28:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /a3
14/09/23 13:28:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /a4
{code}

> TrashPolicyDefault init pushes messages to command line
> ---
>
> Key: HADOOP-11119
> URL: https://issues.apache.org/jira/browse/HADOOP-11119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Minor
>
> During a fresh install of trunk:
> {code}
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -put /etc/hosts /tmp
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -rm /tmp/hosts
> 14/09/23 13:05:46 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> Deleted /tmp/hosts
> {code}
> The info message for the Namenode trash configuration isn't very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145388#comment-14145388
 ] 

Jian He commented on HADOOP-11017:
--

The changes for fixing YARN look fine to me. Regarding KMS, I'm not sure 
whether the following change is intentional; just want to bring it up. 
{{updateMasterKey(key)}} is invoked inside {{updateDelegationKey(currentKey)}}, 
but I think the {{currentKey}} passed in at this time is still the old 
currentKey, not the updated key.
{code}
synchronized (this) {
  removeExpiredKeys();
  currentKey.setExpiryDate(Time.now() + tokenMaxLifetime);
  updateDelegationKey(currentKey);
}
{code}

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2014-09-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11120:
-

 Summary: hadoop fs -rmr gives wrong advice
 Key: HADOOP-11120
 URL: https://issues.apache.org/jira/browse/HADOOP-11120
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


Typing bin/hadoop fs -rmr /a?

gives the output:

rmr: DEPRECATED: Please use 'rm -r' instead.

Typing bin/hadoop fs rm -r /a?

gives the output:

rm: Unknown command




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp

2014-09-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145423#comment-14145423
 ] 

Allen Wittenauer commented on HADOOP-11009:
---

It doesn't look like -p automatically preserves timestamps:
{code}
$ bin/hadoop distcp -p /tmp /a1
...
$ bin/hadoop distcp -pt /tmp /a2
...
$ bin/hadoop fs -ls -R /
drwxr-x---   - aw hdfs          0 2014-09-23 13:52 /a1
-rw-r-----   3 aw hdfs        236 2014-09-23 13:52 /a1/hosts
drwxr-x---   - aw hdfs          0 2014-09-23 13:16 /a2
-rw-r-----   3 aw hdfs        236 2014-09-23 13:16 /a2/hosts
...
drwxr-x---   - aw hdfs          0 2014-09-23 13:16 /tmp
-rw-r-----   3 aw hdfs        236 2014-09-23 13:16 /tmp/hosts
...
{code}

> Add Timestamp Preservation to DistCp
> 
>
> Key: HADOOP-11009
> URL: https://issues.apache.org/jira/browse/HADOOP-11009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.4.0
>Reporter: Gary Steelman
>Assignee: Gary Steelman
> Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, 
> HADOOP-11009.3.patch
>
>
> Currently access and modification times are not preserved on files copied 
> using DistCp. This patch adds an option to DistCp for timestamp preservation. 
> The patch ready, but I understand there is a Contributor form I need to sign 
> before I can upload it. Can someone point me in the right direction for this 
> form? Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145439#comment-14145439
 ] 

Hadoop QA commented on HADOOP-11017:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12670790/HADOOP-11017.12.patch
  against trunk revision 3dc28e2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4794//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4794//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4794//console

This message is automatically generated.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145444#comment-14145444
 ] 

Steve Loughran commented on HADOOP-10714:
-

OK, in this situation we're going to need to make the test handle both outcomes.

# {{ContractOptions}} already lists a set of rename flags - are there any which 
match the operation? 
# If so, check it in the test and make the relevant assertions for whichever 
outcome has been declared as supported. That is, if it says it renames like 
POSIX, it had better.
# If you have to add a new option and put it in the local file, then so be it ... 
at least we've expanded the declarative list of different behaviours.

Thanks for getting involved in the depths of cross-FS-rename semantics, BTW.
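
A sketch of that check pattern (the option key and contract resource name are 
assumptions for illustration, not actual {{ContractOptions}} entries):

{code}
import org.apache.hadoop.conf.Configuration;

public class RenameContractSketch {
  public static void main(String[] args) {
    // Contract files declare per-FS behaviour as fs.contract.* booleans.
    Configuration contract = new Configuration(false);
    contract.addResource("contract/localfs.xml");   // assumed resource name
    boolean overwritesDest =
        contract.getBoolean("fs.contract.rename-overwrites-dest", false);
    if (overwritesDest) {
      System.out.println("assert: destination was silently replaced");
    } else {
      System.out.println("assert: rename was rejected / returned false");
    }
  }
}
{code}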

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.
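
For illustration, a minimal batching sketch against the AWS SDK (simplified; 
the actual fix lives in {{S3AFileSystem}}):

{code}
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

public class BatchedDeleteSketch {
  // S3 multi-object delete accepts at most 1000 keys per request.
  private static final int MAX_ENTRIES = 1000;

  static void deleteAll(AmazonS3Client s3, String bucket,
      List<KeyVersion> keys) {
    for (int i = 0; i < keys.size(); i += MAX_ENTRIES) {
      List<KeyVersion> batch =
          keys.subList(i, Math.min(i + MAX_ENTRIES, keys.size()));
      s3.deleteObjects(new DeleteObjectsRequest(bucket)
          .withKeys(new ArrayList<>(batch)));
    }
  }
}
{code}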



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-09-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145451#comment-14145451
 ] 

Allen Wittenauer commented on HADOOP-8989:
--

[~daryn], were you going to commit this or should I?

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11121) native libraries guide is extremely out of date

2014-09-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11121:
--
Summary: native libraries guide is extremely out of date  (was: native 
libraries guide still mentions pre-built native libs)

> native libraries guide is extremely out of date
> ---
>
> Key: HADOOP-11121
> URL: https://issues.apache.org/jira/browse/HADOOP-11121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: documentation, newbie
>
> The native libraries guide says 
> The pre-built 32-bit i386-Linux native hadoop library is available as part of 
> the hadoop distribution and is located in the lib/native directory. 
> ... this hasn't been true for a while.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11121) native libraries guide still mentions pre-built native libs

2014-09-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11121:
-

 Summary: native libraries guide still mentions pre-built native 
libs
 Key: HADOOP-11121
 URL: https://issues.apache.org/jira/browse/HADOOP-11121
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Minor


The native libraries guide says 

The pre-built 32-bit i386-Linux native hadoop library is available as part of 
the hadoop distribution and is located in the lib/native directory. 

... this hasn't been true for a while.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11121) native libraries guide is extremely out of date

2014-09-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11121:
--
Description: 
The native libraries guide says a few things that haven't been true for a very 
long time:

* RHEL 4

* autotools instead of cmake

* The pre-built 32-bit i386-Linux native hadoop library is available as part of 
the hadoop distribution and is located in the lib/native directory. 


... and probably more.

  was:
The native libraries guide says 

The pre-built 32-bit i386-Linux native hadoop library is available as part of 
the hadoop distribution and is located in the lib/native directory. 

... this hasn't been true for a while.


> native libraries guide is extremely out of date
> ---
>
> Key: HADOOP-11121
> URL: https://issues.apache.org/jira/browse/HADOOP-11121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: documentation, newbie
>
> The native libraries guide says a few things that haven't been true for a 
> very long time:
> * RHEL 4
> * autotools instead of cmake
> * The pre-built 32-bit i386-Linux native hadoop library is available as part 
> of the hadoop distribution and is located in the lib/native directory. 
> ... and probably more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-10714:
-
Attachment: HADOOP-10714.005.patch

Thanks [~ste...@apache.org]. I added a new ContractOption for the rename 
behavior. This contract-driven FS test suite is very flexible and able to 
handle various cases. Great job on that.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, 
> HADOOP-10714.005.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145531#comment-14145531
 ] 

Hadoop QA commented on HADOOP-10714:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12670819/HADOOP-10714.005.patch
  against trunk revision b93d960.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4795//console

This message is automatically generated.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, 
> HADOOP-10714.005.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145574#comment-14145574
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

Good point, Jian. It looks like rollMasterKey should call updateCurrentKey 
before calling updateDelegationKey(currentKey). 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145582#comment-14145582
 ] 

Jian He commented on HADOOP-11017:
--

Or we could move the {{updateMasterKey(key);}} into the {{updateCurrentKey}} 
method.
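
For illustration, a self-contained sketch of that ordering (names modelled on 
the discussion; not the actual AbstractDelegationTokenSecretManager code):

{code}
import java.io.IOException;

public class KeyRollSketch {
  static class DelegationKey {
    final int keyId;
    DelegationKey(int keyId) { this.keyId = keyId; }
  }

  private DelegationKey currentKey;

  /** Pretend persistence; the ZKDTSM would write the key to ZooKeeper. */
  protected void updateMasterKey(DelegationKey key) throws IOException { }

  /** Persist the new key first, then publish it as currentKey. */
  private synchronized void updateCurrentKey(DelegationKey newKey)
      throws IOException {
    updateMasterKey(newKey);   // a failure here leaves the old key in place
    currentKey = newKey;       // only then swap the in-memory pointer
  }
}
{code}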

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145584#comment-14145584
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

Looking at the findbugs errors and the source, I don't quite see the need for 
get/set delegationTokenSeqNum either. Can we get rid of those methods? 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145609#comment-14145609
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

Actually, I see why the get/set/incr methods are added, but it looks like there 
are a couple of inconsistencies. ZKDTSM should either maintain its own counter 
or re-use delegationTokenSeqNumber. 
# If it's using its own counter, methods in ADTSM should be synchronized.
# If it's re-using delegationTokenSeqNumber from ADTSM, (1) we don't need a 
getter, (2) incr should probably call set so the value is updated, (3) set 
should lock on write (see the sketch below). 
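
For option 2, a minimal sketch of what that locking could look like (names 
modelled on the discussion, not the actual patch):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SeqNumSketch {
  private int delegationTokenSequenceNumber;
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  protected void setDelegationTokenSeqNum(int seqNum) {
    lock.writeLock().lock();
    try {
      delegationTokenSequenceNumber = seqNum;
    } finally {
      lock.writeLock().unlock();
    }
  }

  protected int incrementDelegationTokenSeqNum() {
    // The write lock is reentrant, so incr can safely go through set.
    lock.writeLock().lock();
    try {
      setDelegationTokenSeqNum(delegationTokenSequenceNumber + 1);
      return delegationTokenSequenceNumber;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}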

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145616#comment-14145616
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

Given RM HA is broken without the current addendum, I propose getting that in 
and filing a follow-up JIRA (blocker to 2.6) to fix the review comments. 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145621#comment-14145621
 ] 

Jian He commented on HADOOP-11017:
--

+1 for the current patch.  Agree, let's get this in first and do rest in a 
follow-up jira. 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11122) Fix findbugs and order of key updates Abstract/ZK DelegationTokenSecretManagers

2014-09-23 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-11122:
-

 Summary: Fix findbugs and order of key updates Abstract/ZK 
DelegationTokenSecretManagers 
 Key: HADOOP-11122
 URL: https://issues.apache.org/jira/browse/HADOOP-11122
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Arun Suresh
Priority: Blocker


HADOOP-11017 adds a ZK implementation for DelegationTokenSecretManager. This is 
a follow-up JIRA to address the review comments there: findbugs and the order 
of updates to the {{currentKey}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145631#comment-14145631
 ] 

Karthik Kambatla commented on HADOOP-11017:
---

Filed HADOOP-11122 for the follow-up work.

Thanks Jian. I'll go ahead and commit the latest addendum. 

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-23 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11017:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Just committed the addendum to trunk and branch-2.

> KMS delegation token secret manager should be able to use zookeeper as store
> 
>
> Key: HADOOP-11017
> URL: https://issues.apache.org/jira/browse/HADOOP-11017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, 
> HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, 
> HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
> HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
> HADOOP-11017.9.patch, HADOOP-11017.WIP.patch
>
>
> This will allow supporting multiple KMS instances behind a load balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10041) UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even if the kerberos ticket cache is non-renewable

2014-09-23 Thread Raviteja Chirala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145673#comment-14145673
 ] 

Raviteja Chirala commented on HADOOP-10041:
---

Any update on this jira would be helpful. 

> UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even 
> if the kerberos ticket cache is non-renewable
> -
>
> Key: HADOOP-10041
> URL: https://issues.apache.org/jira/browse/HADOOP-10041
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.1-alpha
>Reporter: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.6.0
>
>
> UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew user 
> credentials.  However, it does this even if the kerberos ticket cache in 
> question is non-renewable.
> This leads to an annoying error message being printed out all the time.
> {code}
> cmccabe@keter:/h> klist
> Ticket cache: FILE:/tmp/krb5cc_1014
> Default principal: hdfs/ke...@cloudera.com
> Valid starting     Expires            Service principal
> 07/18/12 15:24:15  07/19/12 15:24:13  krbtgt/cloudera@cloudera.com
> {code}
> {code}
> cmccabe@keter:/h> ./bin/hadoop fs -ls /
> 15:21:39,882  WARN UserGroupInformation:739 - Exception encountered while 
> running the renewal command. Aborting renew thread. 
> org.apache.hadoop.util.Shell$ExitCodeException: kinit: KDC can't fulfill 
> requested option while renewing credentials
> Found 3 items
> -rw-r--r--   3 cmccabe users   0 2012-07-09 17:15 /b
> -rw-r--r--   3 hdfssupergroup  0 2012-07-09 17:17 /c
> drwxrwxrwx   - cmccabe audio   0 2012-07-19 11:25 /tmp
> {code}
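
For illustration, a minimal sketch of the guard being suggested (hypothetical 
helper, not the actual {{UserGroupInformation}} code):

{code}
import javax.security.auth.kerberos.KerberosTicket;

public class RenewGuardSketch {
  /** Skip the renewal thread when the TGT in the cache is not renewable. */
  static boolean shouldSpawnRenewalThread(KerberosTicket tgt) {
    // isRenewable() reflects the RENEWABLE flag on the ticket; running
    // kinit -R against a non-renewable ticket produces the warning above.
    return tgt != null && tgt.isRenewable();
  }
}
{code}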



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10809) hadoop-azure: page blob support

2014-09-23 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HADOOP-10809:
-
Attachment: HADOOP-10809.04.patch

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of azure storage we would 
> like the configuration to be Aspect-style, ie configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.
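
For illustration, a usage sketch of that configuration key (the paths and the 
comma-separated list format are assumptions based on the description above):

{code}
import org.apache.hadoop.conf.Configuration;

public class PageBlobConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // HBase WAL directories need many small appends, so map them to
    // page blobs; everything else stays block-blob.
    conf.set("fs.azure.page.blob.dir", "/hbase/WALs,/hbase/oldWALs");
    System.out.println(conf.get("fs.azure.page.blob.dir"));
  }
}
{code}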



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10809) hadoop-azure: page blob support

2014-09-23 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145703#comment-14145703
 ] 

Eric Hanson commented on HADOOP-10809:
--

Backing up work in progress. Source files merged. Test files about 1/3 merged. 
Still need to run tests to make sure everything still works.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of azure storage we would 
> like the configuration to be Aspect-style, ie configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-10714:
-
Attachment: HADOOP-10714.006.patch

Same patch as previous to trigger Jenkins build.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, 
> HADOOP-10714.005.patch, HADOOP-10714.006.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145859#comment-14145859
 ] 

Hadoop QA commented on HADOOP-10714:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12670888/HADOOP-10714.006.patch
  against trunk revision ef784a2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

  org.apache.hadoop.crypto.random.TestOsSecureRandom

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//console

This message is automatically generated.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, 
> HADOOP-10714.005.patch, HADOOP-10714.006.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-09-23 Thread Juan Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145884#comment-14145884
 ] 

Juan Yu commented on HADOOP-10714:
--

I don't think the failed test is related to this patch.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --
>
> Key: HADOOP-10714
> URL: https://issues.apache.org/jira/browse/HADOOP-10714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0
>Reporter: David S. Wang
>Assignee: Juan Yu
>Priority: Critical
>  Labels: s3
> Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, 
> HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, 
> HADOOP-10714.005.patch, HADOOP-10714.006.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
> to have the number of entries at 1000 or below. Otherwise we get a Malformed 
> XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
> MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
> did not validate against our published schema, S3 Extended Request ID: 
> DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you 
> want to delete. In the XML, you provide the object key names, and optionally, 
> version IDs if you want to delete a specific version of the object from a 
> versioning-enabled bucket. For each key, Amazon S3….”
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-7984) Add hadoop command verbose and debug options

2014-09-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-7984:
-

Assignee: Akira AJISAKA

> Add hadoop command verbose and debug options
> 
>
> Key: HADOOP-7984
> URL: https://issues.apache.org/jira/browse/HADOOP-7984
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-7984.patch
>
>
> It would be helpful if bin/hadoop had verbose and debug flags. Currently 
> users need to set an env variable or prefix the command (eg 
> "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user 
> friendly. We currently log INFO by default. How about we only log ERROR and 
> WARN by default, then -verbose triggers INFO and -debug triggers DEBUG?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2014-09-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145909#comment-14145909
 ] 

Vinayakumar B commented on HADOOP-11120:


I believe you might have got the full error message as below:
{noformat}rm: Unknown command
Did you mean -rm?  This command begins with a dash.{noformat}

I feel it's correct. "rmr" and "rm" are commands, and to use any command with 
FsShell we should prefix it with '-'.


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)