[jira] [Commented] (HADOOP-13001) RetryPolicies$RetryUpToMaximumTimeWithFixedSleep raises division by zero exception if the sleep time is 0, even if max wait == 0

2016-04-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253357#comment-15253357
 ] 

Larry McCay commented on HADOOP-13001:
--

Please ignore the above message. The related commit inadvertently referenced 
this JIRA instead of HADOOP-13011.

> RetryPolicies$RetryUpToMaximumTimeWithFixedSleep raises division by zero 
> exception if the sleep time is 0, even if max wait == 0
> 
>
> Key: HADOOP-13001
> URL: https://issues.apache.org/jira/browse/HADOOP-13001
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> set YARN RM max wait and retry intervals to 0, try to talk to an RM, get an 
> arithmetic exception
> {code}
> Caused by: java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.io.retry.RetryPolicies$RetryUpToMaximumTimeWithFixedSleep.<init>(RetryPolicies.java:265)
> at 
> org.apache.hadoop.io.retry.RetryPolicies.retryUpToMaximumTimeWithFixedSleep(RetryPolicies.java:89)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRetryPolicy(RMProxy.java:237)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:91)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:188)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.spark.deploy.history.yarn.server.YarnHistoryProvider.initYarnClient(YarnHistoryProvider.scala:931)
> at 
> org.apache.spark.deploy.history.yarn.server.YarnHistoryProvider.init(YarnHistoryProvider.scala:296)
> {code}
> I'd have expected the code to recognise the caller is saying "don't retry", 
> but clearly not
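
The constructor apparently derives its retry count by dividing the maximum wait time by the sleep time, which is where a zero sleep interval divides by zero. Below is a minimal sketch of a caller-side guard that honours "don't retry"; the helper class and method are hypothetical, while {{RetryPolicies.TRY_ONCE_THEN_FAIL}} and {{RetryPolicies.retryUpToMaximumTimeWithFixedSleep()}} are existing members.

{code}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public final class RetryPolicyGuard {
  private RetryPolicyGuard() {
  }

  /** Sketch: avoid the division by zero when the caller asks for no retries. */
  static RetryPolicy createRmRetryPolicy(long maxWaitMs, long retryIntervalMs) {
    if (maxWaitMs <= 0 || retryIntervalMs <= 0) {
      // A zero interval would make maxWaitMs / retryIntervalMs divide by zero,
      // so treat this as "try once, then fail".
      return RetryPolicies.TRY_ONCE_THEN_FAIL;
    }
    return RetryPolicies.retryUpToMaximumTimeWithFixedSleep(
        maxWaitMs, retryIntervalMs, TimeUnit.MILLISECONDS);
  }
}
{code}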





[jira] [Updated] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-21 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13011:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2 and branch-2.8.
Thanks for the reviews [~ste...@apache.org].

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch, HADOOP-13011-004.patch
>
>
> HADOOP-12942 discusses the unobviousness of the use of a default password for 
> the keystores for keystore-based credential providers. This patch adds 
> documentation to the CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.





[jira] [Commented] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253305#comment-15253305
 ] 

Andrew Wang commented on HADOOP-13011:
--

Hey Larry, did you forget to resolve this JIRA? Status is still Patch Available.

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch, HADOOP-13011-004.patch
>
>
> HADOOP-12942 discusses the unobviousness of the use of a default password for 
> the keystores for keystore-based credential providers. This patch adds 
> documentation to the CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.





[jira] [Commented] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253295#comment-15253295
 ] 

Hadoop QA commented on HADOOP-13018:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 6 
new + 80 unchanged - 0 fixed = 86 total (was 80) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 32s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 50s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800141/HADOOP-13018.01.patch 
|
| JIRA Issue | HADOOP-13018 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9f59b571bf40 3.13.0-36-lowlatency 

[jira] [Updated] (HADOOP-13049) Fix the TestFailures After HADOOP-12563

2016-04-21 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13049:
--
Summary: Fix the TestFailures After HADOOP-12563  (was: Fix the 
TestFailures After HADOOP-12653)

> Fix the TestFailures After HADOOP-12563
> ---
>
> Key: HADOOP-13049
> URL: https://issues.apache.org/jira/browse/HADOOP-13049
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0
>
>
> The following tests fail after this went in:
> TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
>  » IllegalState
> TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 
> » IllegalState
> TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
>  » IllegalState
> TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
>  » IllegalState
> See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Updated] (HADOOP-13049) Fix the TestFailures After HADOOP-12653

2016-04-21 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13049:
--
Fix Version/s: 3.0.0

> Fix the TestFailures After HADOOP-12653
> ---
>
> Key: HADOOP-13049
> URL: https://issues.apache.org/jira/browse/HADOOP-13049
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0
>
>
> The following tests fail after this went in:
> TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
>  » IllegalState
> TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 
> » IllegalState
> TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
>  » IllegalState
> TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
>  » IllegalState
> See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Updated] (HADOOP-13049) Fix the TestFailures After HADOOP-12653

2016-04-21 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13049:
--
Affects Version/s: 3.0.0

> Fix the TestFailures After HADOOP-12653
> ---
>
> Key: HADOOP-13049
> URL: https://issues.apache.org/jira/browse/HADOOP-13049
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0
>
>
> The following tests fail after this went in:
> TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
>  » IllegalState
> TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 
> » IllegalState
> TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
>  » IllegalState
> TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
>  » IllegalState
> See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Created] (HADOOP-13049) Fix the TestFailures After HADOOP-12653

2016-04-21 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-13049:
-

 Summary: Fix the TestFailures After HADOOP-12653
 Key: HADOOP-13049
 URL: https://issues.apache.org/jira/browse/HADOOP-13049
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The following tests fail after this went in:

TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
 » IllegalState
TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 » 
IllegalState
TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
 » IllegalState
TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
 » IllegalState
See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/





[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-04-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253257#comment-15253257
 ] 

Kai Zheng commented on HADOOP-13010:


Thanks [~lirui] a lot for the help and working on the revision! I will do some 
review first today.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * Suggested by [~jingzhao] quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This would not get rid of some of the inheritance levels, as doing so isn't clear yet 
> and would also have a big impact. I do hope the end result of this 
> refactoring will make all the levels clearer and easier to follow.





[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders

2016-04-21 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-13010:

Attachment: HADOOP-13010-v4.patch

Updated the patch on behalf of Kai. Considering the patch is already very large, 
we'd like to leave some changes as follow-on tasks.
# The previous util methods in {{CodecUtil}} are not removed, so that code in 
HDFS doesn't need to change for the time being.
# Erasure coders may also need refactoring, like the raw coders.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * Suggested by [~jingzhao] quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This would not get rid of some of the inheritance levels, as doing so isn't clear yet 
> and would also have a big impact. I do hope the end result of this 
> refactoring will make all the levels clearer and easier to follow.





[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253240#comment-15253240
 ] 

Brahma Reddy Battula commented on HADOOP-12563:
---

The following tests fail after this went in. Since Jenkins did not run on the 
YARN and MAPREDUCE projects, these failures were missed:

   
TestRMContainerAllocator.testRMContainerAllocatorResendsRequestsOnRMRestart:2535
 » IllegalState
   TestContainerManagerRecovery.testApplicationRecovery:189->startContainer:511 
» IllegalState
  
TestContainerManagerRecovery.testContainerCleanupOnShutdown:412->startContainer:511
 » IllegalState
  
TestContainerManagerRecovery.testContainerResizeRecovery:351->startContainer:511
 » IllegalState

See https://builds.apache.org/job/Hadoop-Yarn-trunk/2051/

{noformat}

FAILED:  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery.testApplicationRecovery

Error Message:
InputStream#read(byte[]) returned invalid result: 0 The InputStream 
implementation is buggy.

Stack Trace:
java.lang.IllegalStateException: InputStream#read(byte[]) returned invalid 
result: 0 The InputStream implementation is buggy.
at 
com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:739)
at 
com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
at 
com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at 
org.apache.hadoop.security.proto.SecurityProtos$CredentialsProto.<init>(SecurityProtos.java:1828)
at 
org.apache.hadoop.security.proto.SecurityProtos$CredentialsProto.<init>(SecurityProtos.java:1792)
at 
org.apache.hadoop.security.proto.SecurityProtos$CredentialsProto$1.parsePartialFrom(SecurityProtos.java:1892)
at 
org.apache.hadoop.security.proto.SecurityProtos$CredentialsProto$1.parsePartialFrom(SecurityProtos.java:1887)
at 
com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at 
org.apache.hadoop.security.proto.SecurityProtos$CredentialsProto.parseFrom(SecurityProtos.java:2100)
at 
org.apache.hadoop.security.Credentials.readProtos(Credentials.java:331)
at 
org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
at 
org.apache.hadoop.yarn.server.utils.YarnServerSecurityUtils.parseCredentials(YarnServerSecurityUtils.java:131)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:924)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:815)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery$3.run(TestContainerManagerRecovery.java:514)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery$3.run(TestContainerManagerRecovery.java:511)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery.startContainer(TestContainerManagerRecovery.java:511)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery.testApplicationRecovery(TestContainerManagerRecovery.java:189)
{noformat}

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can 

[jira] [Commented] (HADOOP-13001) RetryPolicies$RetryUpToMaximumTimeWithFixedSleep raises division by zero exception if the sleep time is 0, even if max wait == 0

2016-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253164#comment-15253164
 ] 

Hudson commented on HADOOP-13001:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9651 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9651/])
HADOOP-13001 - Clearly Document the Password Details for Keystore-based 
(lmccay: rev 3ba490763e5dfcd6ee0def4c63405c20b2721c8c)
* hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md


> RetryPolicies$RetryUpToMaximumTimeWithFixedSleep raises division by zero 
> exception if the sleep time is 0, even if max wait == 0
> 
>
> Key: HADOOP-13001
> URL: https://issues.apache.org/jira/browse/HADOOP-13001
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> set YARN RM max wait and retry intervals to 0, try to talk to an RM, get an 
> arithmetic exception
> {code}
> Caused by: java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.io.retry.RetryPolicies$RetryUpToMaximumTimeWithFixedSleep.<init>(RetryPolicies.java:265)
> at 
> org.apache.hadoop.io.retry.RetryPolicies.retryUpToMaximumTimeWithFixedSleep(RetryPolicies.java:89)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRetryPolicy(RMProxy.java:237)
> at 
> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:91)
> at 
> org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:188)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.spark.deploy.history.yarn.server.YarnHistoryProvider.initYarnClient(YarnHistoryProvider.scala:931)
> at 
> org.apache.spark.deploy.history.yarn.server.YarnHistoryProvider.init(YarnHistoryProvider.scala:296)
> {code}
> I'd have expected the code to recognise the caller is saying "don't retry", 
> but clearly not





[jira] [Commented] (HADOOP-13044) Amazon S3 library depends on http components 4.3

2016-04-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253160#comment-15253160
 ] 

Kai Sasaki commented on HADOOP-13044:
-

[~iwasakims] [~ste...@apache.org] Thanks for checking.
We are using the latest AWS SDK (1.10.60) because older versions of the AWS SDK 
do not work with JDK8 due to an [authentication 
error|https://github.com/aws/aws-sdk-java/issues/484]. According to that 
ticket, the same problem occurs in at least v1.10.10; v1.10.6 might have the 
same problem.

I'll track HADOOP-12767. Can I close this one as a duplicate?

> Amazon S3 library depends on http components 4.3
> 
>
> Key: HADOOP-13044
> URL: https://issues.apache.org/jira/browse/HADOOP-13044
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 2.8.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13044.01.patch
>
>
> When using the AWS SDK on the Hadoop classpath, we hit an issue caused 
> by an incompatibility between the AWS SDK and httpcomponents.
> {code}
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
> {code}
> The latest AWS SDK depends on 4.3.x, which has 
> [DefaultConnectionKeepAliveStrategy.INSTANCE|http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.html#INSTANCE].
>  This field was introduced in 4.3.
> This will allow us to avoid {{CLASSPATH}} conflicts around httpclient 
> versions.





[jira] [Commented] (HADOOP-13035) AbstractService should set state only after state change

2016-04-21 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253127#comment-15253127
 ] 

Bibin A Chundatt commented on HADOOP-13035:
---

[~leftnoteasy]
Could you please review this?

> AbstractService should set state only after state change
> 
>
> Key: HADOOP-13035
> URL: https://issues.apache.org/jira/browse/HADOOP-13035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
> Attachments: 0001-HADOOP-13035.patch, 0002-HADOOP-13035.patch
>
>
> As per the discussion in YARN-3971, we should set the service state 
> to STARTED only after serviceStart() completes.
> Currently {{AbstractService#start()}} does:
> {noformat} 
>  if (stateModel.enterState(STATE.STARTED) != STATE.STARTED) {
> try {
>   startTime = System.currentTimeMillis();
>   serviceStart();
> ..
>  }
> {noformat}
> enterState sets the service state to the proposed state, so 
> {{service.getServiceState()}} called inside {{serviceStart()}} will already return STARTED.
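
A minimal sketch of the ordering the description argues for; this is a simplified stand-in, not the attached patch and not the real {{org.apache.hadoop.service.AbstractService}} internals.

{code}
/**
 * Sketch: only record STARTED after serviceStart() has returned, so code
 * running inside serviceStart() still observes the previous state.
 */
public abstract class SketchService {
  public enum State { NOTINITED, INITED, STARTED, STOPPED }

  private volatile State state = State.INITED;
  private long startTime;

  public final State getServiceState() {
    return state;
  }

  public final synchronized void start() {
    if (state == State.STARTED) {
      return;                      // idempotent start
    }
    startTime = System.currentTimeMillis();
    serviceStart();                // still sees State.INITED here
    state = State.STARTED;         // commit the transition last
  }

  protected abstract void serviceStart();
}
{code}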





[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253081#comment-15253081
 ] 

Hadoop QA commented on HADOOP-12751:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
46s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-common-project: patch generated 1 new + 93 
unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 8s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 44s {color} 
| {color:red} hadoop-auth in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 22s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 117m 8s {color} 
| {color:black} {color} 

[jira] [Updated] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Status: Patch Available  (was: Open)

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HADOOP-13018.01.patch
>
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.





[jira] [Updated] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Attachment: HADOOP-13018.01.patch

This patch improves KDiag so that it checks the existence and validity of 
the files listed in hadoop.token.files.
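
A minimal sketch of the kind of check this adds; the class and method below are hypothetical and not taken from the attached patch, only the {{hadoop.token.files}} property name comes from the issue.

{code}
import java.io.File;

import org.apache.hadoop.conf.Configuration;

public final class TokenFileCheck {
  private TokenFileCheck() {
  }

  /** Sketch: fail fast if a file named in hadoop.token.files is missing or empty. */
  static void verifyTokenFiles(Configuration conf) {
    // getTrimmedStrings returns an empty array when the property is unset.
    for (String path : conf.getTrimmedStrings("hadoop.token.files")) {
      File f = new File(path);
      if (!f.isFile()) {
        throw new IllegalStateException("hadoop.token.files entry not found: " + path);
      }
      if (f.length() == 0) {
        throw new IllegalStateException("hadoop.token.files entry is empty: " + path);
      }
    }
  }
}
{code}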

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HADOOP-13018.01.patch
>
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.





[jira] [Assigned] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reassigned HADOOP-13018:
-

Assignee: Ravi Prakash

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.





[jira] [Updated] (HADOOP-13018) Make Kdiag check hadoop.token.files whether points to existent and valid files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Summary: Make Kdiag check hadoop.token.files whether points to existent and 
valid files  (was: Make Kdiag fail fast if hadoop.token.files points to 
non-existent file)

> Make Kdiag check hadoop.token.files whether points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.





[jira] [Updated] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Summary: Make Kdiag check whether hadoop.token.files points to existent and 
valid files  (was: Make Kdiag check hadoop.token.files whether points to 
existent and valid files)

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.





[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-04-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252899#comment-15252899
 ] 

Andrew Wang commented on HADOOP-12893:
--

I got both my above patches in, thanks Steve for reviewing.

LEGAL-247 was resolved in favor of merging licenses, meaning we fortunately 
won't need a zillion copies of BSD/ALv2/etc. I think the overall goal here is 
to make our best effort at being honest about our dependencies. I surveyed a 
few different Apache projects, and there's no one true style for LICENSE and 
NOTICE.

[~ajisakaa] and [~xiaochen] generously volunteered to help with this. I created 
a Google spreadsheet that we can use to hopefully auto-generate the 
LICENSE/NOTICE files later on. Ping me if you want access to that.

Besides that, I could also use some help to figure out how to copy the LICENSE 
and NOTICE into our JAR files. HBase does this (related JIRA HBASE-14085), so 
we might be able to reuse their pom work.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.





[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252877#comment-15252877
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12957:
--

Thanks for the patch.  We could simply use 
[Semaphore|http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html]
 instead of adding a new class RateLimiter.

Please also add some tests.
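
A minimal sketch of what the Semaphore approach could look like; the class below is hypothetical and not from any attached patch.

{code}
import java.util.concurrent.Semaphore;

/** Sketch: bound the number of outstanding async calls with a plain Semaphore. */
public class AsyncCallLimiter {
  private final Semaphore permits;

  public AsyncCallLimiter(int maxOutstandingCalls) {
    this.permits = new Semaphore(maxOutstandingCalls);
  }

  /** Block before issuing a call once the limit has been reached. */
  public void beforeCall() throws InterruptedException {
    permits.acquire();
  }

  /** Release a slot once the caller has consumed the reply. */
  public void afterReply() {
    permits.release();
  }
}
{code}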

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.





[jira] [Commented] (HADOOP-13043) Add LICENSE.txt entries for bundled javascript dependencies

2016-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252844#comment-15252844
 ] 

Hudson commented on HADOOP-13043:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9649 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9649/])
HADOOP-13043. Add LICENSE.txt entries for bundled javascript (wang: rev 
08b7efa95202b6d6ada143cab9369fac4ebb4c49)
* LICENSE.txt


> Add LICENSE.txt entries for bundled javascript dependencies
> ---
>
> Key: HADOOP-13043
> URL: https://issues.apache.org/jira/browse/HADOOP-13043
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13043.001.patch, hadoop-13043.002.patch
>
>
> None of our bundled javascript dependencies are mentioned in LICENSE.txt. 
> Let's fix that.





[jira] [Commented] (HADOOP-13042) Restore lost leveldbjni LICENSE and NOTICE changes

2016-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252843#comment-15252843
 ] 

Hudson commented on HADOOP-13042:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9649 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9649/])
HADOOP-13042. Restore lost leveldbjni LICENSE and NOTICE changes. (wang: rev 
fea50c5440d83225958c5e346299334559fc37a4)
* LICENSE.txt
* NOTICE.txt


> Restore lost leveldbjni LICENSE and NOTICE changes
> --
>
> Key: HADOOP-13042
> URL: https://issues.apache.org/jira/browse/HADOOP-13042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13042.001.patch
>
>
> As noted on HADOOP-12893, we lost the leveldbjni related NOTICE and LICENSE 
> updates done in YARN-1704 when HADOOP-10956 was committed. Let's restore them.





[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-21 Thread Bolke de Bruin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bolke de Bruin updated HADOOP-12751:

Attachment: 0008-HADOOP-12751-leave-user-validation-to-os.patch

Updated tests to support malformed Kerberos names.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available at the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a check that the 
> 'auth_to_local' rules were applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name to 
> user_ad_local) due to downstream consequences.





[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-04-21 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12291:
-
Fix Version/s: (was: 2.8.0)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Attachments: HADOOP-12291.001.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
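
A minimal sketch of one way transitive (nested) group resolution can be done over JNDI; it is illustrative only, not the attached patch, and the {{(objectClass=group)}} / {{member}} filter is an Active Directory-style assumption that varies by directory.

{code}
import java.util.LinkedHashSet;
import java.util.Set;

import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public final class NestedGroupLookup {
  private NestedGroupLookup() {
  }

  /** Sketch: walk "group is a member of group" links until no new groups appear. */
  static Set<String> resolveNestedGroups(DirContext ctx, String baseDn,
      Set<String> directGroupDns) throws NamingException {
    Set<String> all = new LinkedHashSet<>(directGroupDns);
    Set<String> frontier = new LinkedHashSet<>(directGroupDns);
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    while (!frontier.isEmpty()) {
      Set<String> next = new LinkedHashSet<>();
      for (String groupDn : frontier) {
        NamingEnumeration<SearchResult> results = ctx.search(baseDn,
            "(&(objectClass=group)(member={0}))", new Object[] { groupDn }, controls);
        while (results.hasMore()) {
          String parentDn = results.next().getNameInNamespace();
          if (all.add(parentDn)) {
            next.add(parentDn);   // newly discovered parent group, keep walking
          }
        }
      }
      frontier = next;
    }
    return all;
  }
}
{code}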





[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252800#comment-15252800
 ] 

Hadoop QA commented on HADOOP-12751:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-common-project: patch generated 1 new + 93 
unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 57s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 54s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 11s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 11s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 42s {color} 
| {color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252774#comment-15252774
 ] 

Hadoop QA commented on HADOOP-12291:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 9 
new + 34 unchanged - 0 fixed = 43 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 18s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800060/HADOOP-12291.001.patch
 |
| JIRA Issue | HADOOP-12291 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Updated] (HADOOP-13043) Add LICENSE.txt entries for bundled javascript dependencies

2016-04-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13043:
-
   Resolution: Fixed
Fix Version/s: 2.6.5
   2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

Pushed to all the branches, thanks Steve. I made appropriate edits based on the 
contents of each branch.

> Add LICENSE.txt entries for bundled javascript dependencies
> ---
>
> Key: HADOOP-13043
> URL: https://issues.apache.org/jira/browse/HADOOP-13043
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13043.001.patch, hadoop-13043.002.patch
>
>
> None of our bundled javascript dependencies are mentioned in LICENSE.txt. 
> Let's fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13042) Restore lost leveldbjni LICENSE and NOTICE changes

2016-04-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13042:
-
   Resolution: Fixed
Fix Version/s: 2.6.5
   2.7.3
   3.0.0
   2.8.0
   Status: Resolved  (was: Patch Available)

Pushed through branch-2.6, thanks for the review Steve!

> Restore lost leveldbjni LICENSE and NOTICE changes
> --
>
> Key: HADOOP-13042
> URL: https://issues.apache.org/jira/browse/HADOOP-13042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13042.001.patch
>
>
> As noted on HADOOP-12893, we lost the leveldbjni related NOTICE and LICENSE 
> updates done in YARN-1704 when HADOOP-10956 was committed. Let's restore them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13042) Restore lost leveldbjni LICENSE and NOTICE changes

2016-04-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13042:
-
Fix Version/s: (was: 3.0.0)

> Restore lost leveldbjni LICENSE and NOTICE changes
> --
>
> Key: HADOOP-13042
> URL: https://issues.apache.org/jira/browse/HADOOP-13042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13042.001.patch
>
>
> As noted on HADOOP-12893, we lost the leveldbjni related NOTICE and LICENSE 
> updates done in YARN-1704 when HADOOP-10956 was committed. Let's restore them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-04-21 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-12723:
-
Description: 
Although S3A currently has built-in support for 
{{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
{{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
{{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
support any other credentials provider that implements the 
{{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the ability 
to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance will 
expand the options for S3 credentials, such as:

* temporary credentials from STS, e.g. via 
{{com.amazonaws.auth.STSSessionCredentialsProvider}}
* IAM role-based credentials, e.g. via 
{{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
* a custom credentials provider that satisfies one's own needs, e.g. 
bucket-specific credentials, user-specific credentials, etc.

To support this, we can add a configuration for the fully qualified class name 
of a credentials provider, to be loaded by {{S3AFileSystem.initialize(URI, 
Configuration)}}.

The configured credentials provider should implement 
{{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
accepts {{(URI uri, Configuration conf)}}.
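
For illustration, a minimal sketch of such a pluggable provider. This is a hedged example only: the class, package and property names are hypothetical and not part of any patch.

{code}
package example;  // hypothetical package, for illustration only

import java.net.URI;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;

import org.apache.hadoop.conf.Configuration;

public class MyCredentialsProvider implements AWSCredentialsProvider {
  private final URI uri;
  private final Configuration conf;

  // the (URI, Configuration) constructor that S3A would invoke reflectively
  public MyCredentialsProvider(URI uri, Configuration conf) {
    this.uri = uri;
    this.conf = conf;
  }

  @Override
  public AWSCredentials getCredentials() {
    // e.g. a bucket-specific lookup keyed off uri.getHost();
    // the "my.custom.*" keys are placeholders, not real S3A properties
    return new BasicAWSCredentials(
        conf.getTrimmed("my.custom." + uri.getHost() + ".access.key"),
        conf.getTrimmed("my.custom." + uri.getHost() + ".secret.key"));
  }

  @Override
  public void refresh() {
    // no-op: credentials are static in this sketch
  }
}
{code}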



  was:
Although S3A currently has built-in support for 
{{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
{{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
{{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
support any other credentials provider that implements the 
{{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the ability 
to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance will 
expand the options for S3 credentials, such as:

* temporary credentials from STS, e.g. via 
{{com.amazonaws.auth.STSSessionCredentialsProvider}}
* IAM role-based credentials, e.g. via 
{{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
* a custom credentials provider that satisfies one's own needs, e.g. 
bucket-specific credentials, user-specific credentials, etc.

To support this, we can add a configuration for the fully qualified class name 
of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} and 
added to its credentials provider chain.

The configured credentials provider should implement 
{{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
accepts {{(URI uri, Configuration conf)}}.




> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch, HADOOP-12723.1.patch, 
> HADOOP-12723.2.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by 
> {{S3AFileSystem.initialize(URI, Configuration)}}.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252596#comment-15252596
 ] 

Hadoop QA commented on HADOOP-12891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 0s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 17s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} 

[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-04-21 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Attachment: HADOOP-12291.001.patch

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
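
For illustration, a hedged sketch of the nested-group expansion idea, independent of the attached patch; the names and data structures are illustrative only.

{code}
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

class NestedGroupSketch {
  /**
   * Given a user's direct groups and a "group -> groups it is a member of"
   * lookup, return the transitive closure, so jdoe in A (where A is a member
   * of B) resolves to both A and B.
   */
  static Set<String> resolve(Set<String> directGroups,
                             Map<String, Set<String>> memberOf) {
    Set<String> all = new LinkedHashSet<>(directGroups);
    Deque<String> toVisit = new ArrayDeque<>(directGroups);
    while (!toVisit.isEmpty()) {
      String group = toVisit.poll();
      for (String parent : memberOf.getOrDefault(group, Collections.emptySet())) {
        if (all.add(parent)) {   // visit each group only once; guards against cycles
          toVisit.add(parent);
        }
      }
    }
    return all;
  }
}
{code}

In a real LDAP deployment the "member of" lookup would be a per-group LDAP query (or, on Active Directory, a matching-rule-in-chain filter), bounded by a configurable depth; the sketch only shows the traversal.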



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-04-21 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
   Labels: features patch  (was: )
Fix Version/s: 2.8.0
Affects Version/s: 2.8.0
 Target Version/s: 2.8.0
   Status: Patch Available  (was: In Progress)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: patch, features
> Fix For: 2.8.0
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13047) S3a Forward seek in stream length to be configurable

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252541#comment-15252541
 ] 

Steve Loughran commented on HADOOP-13047:
-

+ add all bytes skipped to the statistics of bytes read; currently they aren't counted.

> S3a Forward seek in stream length to be configurable
> 
>
> Key: HADOOP-13047
> URL: https://issues.apache.org/jira/browse/HADOOP-13047
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Even with lazy seek, tests can show that sometimes a short-distance forward 
> seek is triggering a close + reopen, because the threshold for the seek is 
> simply available bytes in the inner stream.
> A configurable threshold would allow data to be read and discarded before 
> that seek. This should be beneficial over long-haul networks as the time to 
> set up the TCP channel is high, and TCP-slow-start means that the ramp up of 
> bandwidth is slow. In such deployments, it will be better to read forward than 
> re-open, though the exact "best" number will vary with client and endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-13029) Have FairCallQueue try all lower priority sub queues before backoff

2016-04-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252533#comment-15252533
 ] 

Arpit Agarwal edited comment on HADOOP-13029 at 4/21/16 7:48 PM:
-

Hi [~mingma], backoff has two good properties:
# Prevents stalling the IPC reader thread indefinitely.
# Throttles clients when the NameNode is under load by signaling congestion.

Spilling to lower priority queue with FairCallQueue delays the 'congestion' 
signal so it makes (2) less effective. The networking world seems to think 
earlier notification of congestion is better e.g. [TCP 
RED|https://en.wikipedia.org/wiki/Random_early_detection], 
[ECN|https://en.wikipedia.org/wiki/Explicit_Congestion_Notification] and 
delay-based congestion control.

bq. A heavy user generates lots of rpc requests, but it only filled up 1/4 of 
the lowest priority sub queue. However that is enough to cause lock contention 
with DN RPC requests.
[~xyao] recently introduced HADOOP-12916 with the goal of addressing the same 
problem.


was (Author: arpitagarwal):
Hi [~mingma], backoff has two goals:
# Prevent stalling the IPC reader thread indefinitely.
# Throttle clients when the NameNode is under load by signaling congestion.

Spilling to lower priority queues delays the 'congestion' signal so it makes 
(2) less effective. The networking world seems to think earlier notification of 
congestion is better e.g. [TCP 
RED|https://en.wikipedia.org/wiki/Random_early_detection], 
[ECN|https://en.wikipedia.org/wiki/Explicit_Congestion_Notification] and 
delay-based congestion control.

bq. A heavy user generates lots of rpc requests, but it only filled up 1/4 of 
the lowest priority sub queue. However that is enough to cause lock contention 
with DN RPC requests.
[~xyao] recently introduced HADOOP-12916 with the goal of addressing the same 
problem.

> Have FairCallQueue try all lower priority sub queues before backoff
> ---
>
> Key: HADOOP-13029
> URL: https://issues.apache.org/jira/browse/HADOOP-13029
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> Currently if FairCallQueue and backoff are enabled, backoff will kick in as 
> soon as the assigned sub queue is filled up.
> {noformat}
>   /**
>* Put and offer follow the same pattern:
>* 1. Get the assigned priorityLevel from the call by scheduler
>* 2. Get the nth sub-queue matching this priorityLevel
>* 3. delegate the call to this sub-queue.
>*
>* But differ in how they handle overflow:
>* - Put will move on to the next queue until it lands on the last queue
>* - Offer does not attempt other queues on overflow
>*/
> {noformat}
> Seems it is better to try lower priority sub queues when the assigned sub 
> queue is full, just like the case when backoff is disabled. This will give 
> regular users more opportunities and allow the cluster to be configured with 
> smaller call queue length. [~chrili], [~arpitagarwal], what do you think?
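
For reference, a hedged sketch of the proposed offer-with-spill behaviour; this is illustrative only and is not the current FairCallQueue code.

{code}
import java.util.List;
import java.util.concurrent.BlockingQueue;

class SpillingOfferSketch<E> {
  private final List<BlockingQueue<E>> queues;   // index 0 = highest priority

  SpillingOfferSketch(List<BlockingQueue<E>> queues) {
    this.queues = queues;
  }

  /**
   * Try the assigned priority level first, then every lower-priority queue,
   * mirroring what put() already does; only back off once all of them are full.
   */
  boolean offer(E call, int priorityLevel) {
    for (int i = priorityLevel; i < queues.size(); i++) {
      if (queues.get(i).offer(call)) {
        return true;
      }
    }
    return false;   // every queue from priorityLevel downwards was full -> back off
  }
}
{code}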



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13029) Have FairCallQueue try all lower priority sub queues before backoff

2016-04-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252533#comment-15252533
 ] 

Arpit Agarwal commented on HADOOP-13029:


Hi [~mingma], backoff has two goals:
# Prevent stalling the IPC reader thread indefinitely.
# Throttle clients when the NameNode is under load by signaling congestion.

Spilling to lower priority queues delays the 'congestion' signal so it makes 
(2) less effective. The networking world seems to think earlier notification of 
congestion is better e.g. [TCP 
RED|https://en.wikipedia.org/wiki/Random_early_detection], 
[ECN|https://en.wikipedia.org/wiki/Explicit_Congestion_Notification] and 
delay-based congestion control.

bq. A heavy user generates lots of rpc requests, but it only filled up 1/4 of 
the lowest priority sub queue. However that is enough to cause lock contention 
with DN RPC requests.
[~xyao] recently introduced HADOOP-12916 with the goal of addressing the same 
problem.

> Have FairCallQueue try all lower priority sub queues before backoff
> ---
>
> Key: HADOOP-13029
> URL: https://issues.apache.org/jira/browse/HADOOP-13029
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> Currently if FairCallQueue and backoff are enabled, backoff will kick in as 
> soon as the assigned sub queue is filled up.
> {noformat}
>   /**
>* Put and offer follow the same pattern:
>* 1. Get the assigned priorityLevel from the call by scheduler
>* 2. Get the nth sub-queue matching this priorityLevel
>* 3. delegate the call to this sub-queue.
>*
>* But differ in how they handle overflow:
>* - Put will move on to the next queue until it lands on the last queue
>* - Offer does not attempt other queues on overflow
>*/
> {noformat}
> Seems it is better to try lower priority sub queues when the assigned sub 
> queue is full, just like the case when backoff is disabled. This will give 
> regular users more opportunities and allow the cluster to be configured with 
> smaller call queue length. [~chrili], [~arpitagarwal], what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13047) S3a Forward seek in stream length to be configurable

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252524#comment-15252524
 ] 

Steve Loughran commented on HADOOP-13047:
-

I propose

# having a configurable default value
# implementing {{setReadAhead(long)}} to allow code to dynamically tune this 
value. It doesn't quite control the pre-load, but it tunes how far ahead the 
stream can read before triggering an expensive seek.
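
A hedged sketch of how such a threshold could gate the forward-seek decision; the field and property names here (e.g. {{readAheadRange}}) are assumptions, not a final API.

{code}
import java.io.IOException;
import java.io.InputStream;

class ForwardSeekSketch {
  private final InputStream wrapped;   // the underlying ranged-GET stream (assumed)
  private long pos;                    // current position within the object
  private final long readAheadRange;   // the configurable threshold proposed above

  ForwardSeekSketch(InputStream wrapped, long readAheadRange) {
    this.wrapped = wrapped;
    this.readAheadRange = readAheadRange;
  }

  void seek(long targetPos) throws IOException {
    long diff = targetPos - pos;
    if (diff > 0 && diff <= readAheadRange) {
      // cheap path: read and discard bytes instead of a close + HTTP re-open
      // (a full implementation would loop until all diff bytes are skipped)
      pos += wrapped.skip(diff);
    } else if (diff != 0) {
      // expensive path: close the connection and reopen at targetPos
      reopen(targetPos);
    }
  }

  private void reopen(long targetPos) throws IOException {
    wrapped.close();
    pos = targetPos;
    // ... issue a new ranged GET starting at targetPos (omitted in this sketch)
  }
}
{code}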

> S3a Forward seek in stream length to be configurable
> 
>
> Key: HADOOP-13047
> URL: https://issues.apache.org/jira/browse/HADOOP-13047
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Even with lazy seek, tests can show that sometimes a short-distance forward 
> seek is triggering a close + reopen, because the threshold for the seek is 
> simply available bytes in the inner stream.
> A configurable threshold would allow data to be read and discarded before 
> that seek. This should be beneficial over long-haul networks as the time to 
> set up the TCP channel is high, and TCP-slow-start means that the ramp up of 
> bandwidth is slow. In such deployments, it will be better to read forward than 
> re-open, though the exact "best" number will vary with client and endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13047) S3a Forward seek in stream length to be configurable

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252518#comment-15252518
 ] 

Steve Loughran commented on HADOOP-13047:
-

Output of a test run from my laptop (Ethernet over Power to the base station, BT FTTC 
80Mbit down / 15Mbit up), connected to Amazon US.

This is with lazy seek; the log shows the initial getPos() value, then the final 
getPos(). Looks like 1s is the time to open a connection.

{code}

2016-04-21 20:26:32,979 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of stat = 187,328,000 ns
2016-04-21 20:26:33,149 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of open = 169,326,000 ns
2016-04-21 20:26:33,149 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:33,496 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of read() [pos = 0] = 345,822,000 ns
2016-04-21 20:26:33,496 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1 nextReadPos=1 
contentLength=20299927}
2016-04-21 20:26:33,496 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:33,497 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(256) [pos = 1] = 17,000 ns
2016-04-21 20:26:33,497 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1 nextReadPos=256 
contentLength=20299927}
2016-04-21 20:26:33,497 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:33,497 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(256) [pos = 256] = 17,000 ns
2016-04-21 20:26:33,498 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1 nextReadPos=256 
contentLength=20299927}
2016-04-21 20:26:33,499 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:33,499 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(EOF-2) [pos = 256] = 12,000 ns
2016-04-21 20:26:33,499 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1 nextReadPos=20299925 
contentLength=20299927}
2016-04-21 20:26:33,499 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:34,423 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of read() [pos = 20299925] = 922,650,000 ns
2016-04-21 20:26:34,423 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=20299926 
nextReadPos=20299926 contentLength=20299927}
2016-04-21 20:26:34,423 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:34,758 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1, byte[1]) [pos = 20299926] = 333,713,000 ns
2016-04-21 20:26:34,758 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=2 nextReadPos=20299926 
contentLength=20299927}
2016-04-21 20:26:34,758 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 

// series of forward looking reads
2016-04-21 20:26:35,767 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1, byte[256]) [pos = 20299926] = 1,008,214,000 ns
2016-04-21 20:26:35,767 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=257 nextReadPos=20299926 
contentLength=20299927}
2016-04-21 20:26:35,767 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 

next read is in available(), so cost is 61 microseconds
2016-04-21 20:26:35,768 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(260, byte[256]) [pos = 20299926] = 61,000 ns
2016-04-21 20:26:35,768 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=516 nextReadPos=20299926 
contentLength=20299927}
2016-04-21 20:26:35,768 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 

As is this one, just under 512 bytes ahead
2016-04-21 20:26:35,768 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1024, byte[256]) [pos = 20299926] = 23,000 ns
2016-04-21 20:26:35,769 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1280 nextReadPos=20299926 
contentLength=20299927}
2016-04-21 20:26:35,769 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-04-21 20:26:35,769 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1536, byte[256]) [pos = 20299926] = 28,000 ns
2016-04-21 20:26:35,769 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
S3AInputStream{s3a://landsat-pds/scene_list.gz pos=1792 nextReadPos=20299926 
contentLength=20299927}
2016-04-21 20:26:35,769 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 

// going forward to 8192 bytes triggers a full close and read
2016-04-21 20:26:38,634 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(8192, byte[1024]) [pos = 20299926] = 

[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252498#comment-15252498
 ] 

Hadoop QA commented on HADOOP-12563:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} root: patch generated 0 new + 7 unchanged - 27 fixed 
= 7 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 0s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 37s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 5s 

[jira] [Created] (HADOOP-13048) Improvements to StatsD metrics2 sink

2016-04-21 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13048:
--

 Summary: Improvements to StatsD metrics2 sink
 Key: HADOOP-13048
 URL: https://issues.apache.org/jira/browse/HADOOP-13048
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Xiao Chen
Priority: Minor


In a recent offline review of feature HADOOP-12360, [~jojochuang] made some 
good comments. The feature is a nice one overall, but it could use some 
improvements:

- Validation should be more robust (see the sketch after this list):
{code}
public void init(SubsetConfiguration conf) {
// Get StatsD host configurations.
final String serverHost = conf.getString(SERVER_HOST_KEY);
final int serverPort = Integer.parseInt(conf.getString(SERVER_PORT_KEY));
{code}
- Javadoc should be more accurate:
** Inconsistency: host.name vs. hostname
** Could have better explanation regarding service name and process name
- {{StatsDSink#writeMetric}} should be private.
- Hopefully a document about this and other metric sinks.
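
On the validation point above, a hedged sketch of what more robust checks could look like. This is not the actual StatsDSink code; it assumes the existing SERVER_HOST_KEY / SERVER_PORT_KEY constants and Hadoop's {{org.apache.hadoop.metrics2.MetricsException}}.

{code}
public void init(SubsetConfiguration conf) {
  // Fail fast with a clear message instead of a NumberFormatException or NPE.
  final String serverHost = conf.getString(SERVER_HOST_KEY);
  if (serverHost == null || serverHost.isEmpty()) {
    throw new MetricsException("Missing required property " + SERVER_HOST_KEY);
  }
  final String portValue = conf.getString(SERVER_PORT_KEY);
  final int serverPort;
  try {
    serverPort = Integer.parseInt(portValue);
  } catch (NumberFormatException nfe) {
    throw new MetricsException(
        "Invalid value '" + portValue + "' for " + SERVER_PORT_KEY);
  }
  // ... rest of init() unchanged
}
{code}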

Thanks Wei-Chiu and [~dlmarion] for the contribution!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13047) S3a Forward seek in stream length to be configurable

2016-04-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13047:
---

 Summary: S3a Forward seek in stream length to be configurable
 Key: HADOOP-13047
 URL: https://issues.apache.org/jira/browse/HADOOP-13047
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


Even with lazy seek, tests can show that sometimes a short-distance forward 
seek is triggering a close + reopen, because the threshold for the seek is 
simply available bytes in the inner stream.

A configurable threshold would allow data to be read and discarded before that 
seek. This should be beneficial over long-haul networks as the time to set up 
the TCP channel is high, and TCP-slow-start means that the ramp up of bandwidth 
is slow. In such deployments, it will be better to read forward than re-open, 
though the exact "best" number will vary with client and endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252480#comment-15252480
 ] 

Hudson commented on HADOOP-12563:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9646 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9646/])
HADOOP-12563. Updated utility (dtutil) to create/modify token files. (raviprak: 
rev 4838b735f0d472765f402fe6b1c8b6ce85b9fbf1)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFetcher.java
* hadoop-common-project/hadoop-common/src/main/proto/Security.proto
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestDtUtilShell.java
* 
hadoop-common-project/hadoop-common/src/test/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/CommandShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/WebHdfsDtFetcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsDtFetcher.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tools/TestCommandShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestDtFetcher.java
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SWebHdfsDtFetcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md


> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13028) add counter and timer metrics for S3A HTTP & low-level operations

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252474#comment-15252474
 ] 

Steve Loughran commented on HADOOP-13028:
-

I'd like counters in {{FileSystem}} for:

Actions on individual blobs:
# create
# stat
# copy
# delete

+ list (path)

In FS input stream:

* count of times the stream was closed
* count of times it was aborted
* count of times it was reopened due to a forward or backward seek
* count of times it was re-opened due to an IO problem

I'd also like these counters to be visible to tests; at the very least the 
toString() operator should dump them, but ideally the raw counters would be 
readable too. Why? It lets me write tests which actually compare the number of 
times actions take place (e.g. forward-seek closures) and tune the code for 
that, which can be done a lot more deterministically than just measuring test 
duration in some microbenchmark.
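
To make the intent concrete, a hedged sketch of the sort of per-stream counter holder meant here; purely illustrative, the names are not from any existing patch.

{code}
import java.util.concurrent.atomic.AtomicLong;

class StreamStatisticsSketch {
  final AtomicLong opened = new AtomicLong();
  final AtomicLong closed = new AtomicLong();
  final AtomicLong aborted = new AtomicLong();
  final AtomicLong reopenedForwardSeek = new AtomicLong();
  final AtomicLong reopenedBackwardSeek = new AtomicLong();
  final AtomicLong reopenedIOProblem = new AtomicLong();

  @Override
  public String toString() {
    // dumped from the stream's toString() so tests and logs can see the values;
    // tests could also read the AtomicLong fields directly for exact assertions
    return String.format(
        "opened=%d closed=%d aborted=%d fwd-seek-reopen=%d bwd-seek-reopen=%d io-reopen=%d",
        opened.get(), closed.get(), aborted.get(),
        reopenedForwardSeek.get(), reopenedBackwardSeek.get(),
        reopenedIOProblem.get());
  }
}
{code}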



> add counter and timer metrics for S3A HTTP & low-level operations
> -
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Against S3 (and other object stores), opening connections can be expensive, and 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13028) add counter and timer metrics for S3A HTTP & low-level operations

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Summary: add counter and timer metrics for S3A HTTP & low-level operations  
(was: add counter and timer metrics for S3A operations)

> add counter and timer metrics for S3A HTTP & low-level operations
> -
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Against S3 (and other object stores), opening connections can be expensive, and 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12891:

Status: Patch Available  (was: Open)

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> --
>
> Key: HADOOP-12891
> URL: https://issues.apache.org/jira/browse/HADOOP-12891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: HADOOP-12891-001.patch, HADOOP-12891-002.patch
>
>
> In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk 
> size are very high [1],
> {noformat}
> /** Default size threshold for Amazon S3 object after which multi-part 
> copy is initiated. */
> private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
> /** Default minimum size of each part for multi-part copy. */
> private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower but still reasonable threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and size to 25 MB with good results.
> Amazon enforces a minimum of 5 MB [2].
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. This 
> very high threshold for utilizing the multipart functionality can make the 
> performance considerably worse, particularly for files in the 100MB to 5GB 
> range which is fairly typical for mapreduce job outputs.
> Two apparent options are:
> 1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both. This seems preferable as the 
> accompanying documentation [3] for these configuration properties actually 
> already says that they are applicable for either "uploads or copies". We just 
> need to add in the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> like:
> {noformat}
> /* Handle copies in the same way as uploads. */
> transferConfiguration.setMultipartCopyPartSize(partSize);
> transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be independently configured, maybe change the defaults to be lower 
> than Amazon's, set into {{TransferManagerConfiguration}} in the same way.
> In any case, at a minimum, if neither of the above options is an acceptable 
> change, the config documentation should be adjusted to match the code, noting 
> that {{fs.s3a.multipart.threshold}} and {{fs.s3a.multipart.size}} are 
> applicable to uploads of new objects only and not copies (i.e. renaming 
> objects).
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286
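
As a hedged illustration of option 1 from a client's point of view, the existing properties set to the 25 MB figure mentioned above (values are examples only):

{code}
import org.apache.hadoop.conf.Configuration;

public class S3ACopyTuningExample {
  public static Configuration tuned() {
    Configuration conf = new Configuration();
    // Under option 1 these existing keys would govern copies as well as uploads.
    conf.setLong("fs.s3a.multipart.size", 25L * 1024 * 1024);       // part size: 25 MB
    conf.setLong("fs.s3a.multipart.threshold", 25L * 1024 * 1024);  // threshold: 25 MB
    return conf;
  }
}
{code}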



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-12563:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
 Release Note: This feature introduces a new command called "hadoop dtutil" 
which lets users request and download delegation tokens with certain attributes.
   Status: Resolved  (was: Patch Available)

+1. LGTM. Committed to trunk

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252418#comment-15252418
 ] 

Ravi Prakash commented on HADOOP-12563:
---

Thanks Matt for all your work and for your deep insight Steve! Committing 
shortly

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-04-21 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252384#comment-15252384
 ] 

Ray Chiang commented on HADOOP-12738:
-

[~ajisakaa] or [~iwasakims], let me know if you have the spare time to look at 
it.

> Create unit test to automatically compare Common related classes and 
> core-default.xml
> -
>
> Key: HADOOP-12738
> URL: https://issues.apache.org/jira/browse/HADOOP-12738
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12738.001.patch, HADOOP-12738.002.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> Common related classes and core-default.xml. It should throw an error if a 
> property is missing in either the class or the file.
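
A hedged sketch of the comparison approach, independent of the attached patches; class and resource names are illustrative only.

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;

class ConfigFieldComparisonSketch {
  /** Property names declared as public static String constants on a keys class. */
  static Set<String> declaredKeys(Class<?> keysClass) throws IllegalAccessException {
    Set<String> names = new HashSet<>();
    for (Field f : keysClass.getFields()) {
      if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
        names.add((String) f.get(null));
      }
    }
    return names;
  }

  /** Property names defined in core-default.xml. */
  static Set<String> xmlKeys() {
    Configuration conf = new Configuration(false);
    conf.addResource("core-default.xml");
    Set<String> names = new HashSet<>();
    for (Map.Entry<String, String> entry : conf) {
      names.add(entry.getKey());
    }
    return names;
  }
  // A test would diff the two sets and fail on anything present in only one of them.
}
{code}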



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13029) Have FairCallQueue try all lower priority sub queues before backoff

2016-04-21 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252286#comment-15252286
 ] 

Ming Ma commented on HADOOP-13029:
--

Thanks [~daryn]! Here is the issue we had that motivates this jira, but after 
offline discussion with [~chrilisf] and team members, we feel like tuning 
FairCallQueue configs should achieve the same result.

With FairCallQueue and backoff, we don't get many complaints regarding one 
abusive user's impact on other users. The main issue we currently have is a 
heavy user's impact on datanode service RPC requests, which has been increasing 
as we continue to expand our cluster size. FairCallQueue is only for client 
RPC, not for datanode RPC. There was some discussion in HADOOP-10599 about 
this. Specifically:

* A heavy user generates lots of rpc requests, but it only filled up 1/4 of the 
lowest priority sub queue. However that is enough to cause lock contention with 
DN RPC requests.
* So to have backoff kick in sooner for the heavy user, we can reduce the rpc 
sub queue length. But that will impact all rpc sub queues.
* After the call queue length reduction, if lots of light users belonging to p0 
come in at the same time, some light users will get backed off, given p0 sub 
queue is much smaller than before. Thus if it can overflow to the next queue, 
light users at least won't get backed off.

However, several configs tuning including client and service rpc handler count 
and FairCallQueue weight adjustment should be able to achieve the same result.

On a related note, if FairCallQueue is used but backoff is disabled, then as 
mentioned in the description the put method will move on to the next queue until 
it lands on the last queue. It isn't clear why it can't just block on the 
corresponding sub queue instead. In other words, why is overflow useful in the 
blocking case; is it to reduce the chance of the reader threads being blocked? 
Still, it seems config tuning can also achieve that, similar to the argument for 
the backoff case.


> Have FairCallQueue try all lower priority sub queues before backoff
> ---
>
> Key: HADOOP-13029
> URL: https://issues.apache.org/jira/browse/HADOOP-13029
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> Currently if FairCallQueue and backoff are enabled, backoff will kick in as 
> soon as the assigned sub queue is filled up.
> {noformat}
>   /**
>* Put and offer follow the same pattern:
>* 1. Get the assigned priorityLevel from the call by scheduler
>* 2. Get the nth sub-queue matching this priorityLevel
>* 3. delegate the call to this sub-queue.
>*
>* But differ in how they handle overflow:
>* - Put will move on to the next queue until it lands on the last queue
>* - Offer does not attempt other queues on overflow
>*/
> {noformat}
> Seems it is better to try lower priority sub queues when the assigned sub 
> queue is full, just like the case when backoff is disabled. This will give 
> regular users more opportunities and allow the cluster to be configured with 
> smaller call queue length. [~chrili], [~arpitagarwal], what do you think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252248#comment-15252248
 ] 

Hadoop QA commented on HADOOP-12942:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 38 unchanged - 70 fixed = 38 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 54s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.net.TestDNS |
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1271/HADOOP-12942.005.patch
 |
| JIRA Issue | HADOOP-12942 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |

[jira] [Commented] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2016-04-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252167#comment-15252167
 ] 

Chris Nauroth commented on HADOOP-12550:


Hello [~GergelyNovak].

Thank you for the suggestion, but unfortunately, that wouldn't quite provide 
the expected semantics.  Callers of rename typically have an expectation of 
atomicity, such that the rename either succeeds completely or fails completely, 
with no visible in-between states.  With the proposed change, if the process 
crashes or the host powers down after the delete, but before the rename, then 
the destination file is permanently lost.  In the specific case described in 
the example, a DataNode could lose a block.

There are already a few spots in the codebase where we do use a similar 
workaround for Windows, but it's not ideal.  I'd prefer for the scope of this 
issue to be providing an atomic rename-with-replace operation.
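
For illustration only (this is not the NativeIO/JNI code path this issue targets), the semantics being asked for -- replace the destination in a single operation or fail with the destination untouched -- are what java.nio expresses with {{ATOMIC_MOVE}} plus {{REPLACE_EXISTING}}:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class AtomicReplaceSketch {
  public static void main(String[] args) throws IOException {
    Path src = Paths.get(args[0]);
    Path dst = Paths.get(args[1]);
    // REPLACE_EXISTING avoids the "destination exists" failure; ATOMIC_MOVE
    // requests that the move happen as one file system operation, throwing
    // AtomicMoveNotSupportedException if the platform cannot guarantee it.
    Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING,
        StandardCopyOption.ATOMIC_MOVE);
  }
}
{code}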

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-21 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12957:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-12909)

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This issue proposes limiting the number of 
> outstanding async calls to eliminate that risk.
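
One common way to bound outstanding calls is a counting semaphore, acquired when a call is issued and released when the call completes; a minimal sketch (illustrative only, not the mechanism in the attached patch):

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class BoundedAsyncCaller {
  private final Semaphore permits;

  public BoundedAsyncCaller(int maxOutstanding) {
    this.permits = new Semaphore(maxOutstanding);
  }

  // Blocks the caller once maxOutstanding calls are still in flight,
  // so the reply buffer cannot grow without bound.
  public <T> CompletableFuture<T> call(Supplier<T> rpc) throws InterruptedException {
    permits.acquire();
    return CompletableFuture.supplyAsync(rpc)
        .whenComplete((result, error) -> permits.release());
  }
}
{code}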



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13045) hadoop_add_classpath is not working in .hadooprc

2016-04-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252143#comment-15252143
 ] 

Allen Wittenauer commented on HADOOP-13045:
---

Today's shower thought:

What if we turned the current .hadooprc support into .hadoopenv and added a new 
.hadooprc hook that gets called after initialization is done, so that 
functions work?

> hadoop_add_classpath is not working in .hadooprc
> 
>
> Key: HADOOP-13045
> URL: https://issues.apache.org/jira/browse/HADOOP-13045
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>
> {{hadoop_basic_function}} resets {{CLASSPATH}} after {{.hadooprc}} is called.
> {noformat}
> $ hadoop --debug version
> (snip)
> DEBUG: Applying the user's .hadooprc
> DEBUG: Initial CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
> DEBUG: Initialize CLASSPATH
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/build/native
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/lib/native
> DEBUG: Initial CLASSPATH=/usr/local/hadoop/share/hadoop/common/lib/*
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-04-21 Thread Andrew Olson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252140#comment-15252140
 ] 

Andrew Olson commented on HADOOP-12891:
---

Thanks Steve, looks good from here.

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> --
>
> Key: HADOOP-12891
> URL: https://issues.apache.org/jira/browse/HADOOP-12891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: HADOOP-12891-001.patch, HADOOP-12891-002.patch
>
>
> In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk 
> size are very high [1],
> {noformat}
> /** Default size threshold for Amazon S3 object after which multi-part 
> copy is initiated. */
> private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
> /** Default minimum size of each part for multi-part copy. */
> private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower but still reasonable threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and size to 25 MB with good results.
> Amazon enforces a minimum of 5 MB [2].
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. This 
> very high threshold for utilizing the multipart functionality can make the 
> performance considerably worse, particularly for files in the 100MB to 5GB 
> range which is fairly typical for mapreduce job outputs.
> Two apparent options are:
> 1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both. This seems preferable as the 
> accompanying documentation [3] for these configuration properties actually 
> already says that they are applicable for either "uploads or copies". We just 
> need to add in the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> like:
> {noformat}
> /* Handle copies in the same way as uploads. */
> transferConfiguration.setMultipartCopyPartSize(partSize);
> transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be independently configured, maybe change the defaults to be lower 
> than Amazon's, set into {{TransferManagerConfiguration}} in the same way.
> In any case, at a minimum, if neither of the above options is an acceptable 
> change, the config documentation should be adjusted to match the code, noting 
> that {{fs.s3a.multipart.threshold}} and {{fs.s3a.multipart.size}} are 
> applicable to uploads of new objects only and not copies (i.e. renaming 
> objects).
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286
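
A minimal sketch of option 1, wiring the existing multipart settings into the copy-side configuration as well (property defaults below are illustrative, not the committed values):

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import org.apache.hadoop.conf.Configuration;

public class S3ACopyTuningSketch {
  static TransferManager newTransferManager(AmazonS3 s3, Configuration conf) {
    long partSize = conf.getLong("fs.s3a.multipart.size", 100 * 1024 * 1024L);
    long threshold = conf.getLong("fs.s3a.multipart.threshold", 2147483647L);

    TransferManagerConfiguration tmc = new TransferManagerConfiguration();
    // Uploads: already configured by S3AFileSystem today.
    tmc.setMinimumUploadPartSize(partSize);
    tmc.setMultipartUploadThreshold(threshold);
    // Copies: the missing calls this issue proposes. A rename() in S3A is a
    // server-side copy, so the same threshold and part size apply.
    tmc.setMultipartCopyPartSize(partSize);
    tmc.setMultipartCopyThreshold(threshold);

    TransferManager transfers = new TransferManager(s3);
    transfers.setConfiguration(tmc);
    return transfers;
  }
}
{code}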



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12891:

Attachment: HADOOP-12891-002.patch

Patch 002: Adds the documentation.

Excluding the docs, this patch is Andrew's work, just converted into a .patch 
file. Therefore I still consider myself in a position to be a reviewer. 
However, I'd still like others with s3 access to test this, just to make sure 
there aren't surprises.

My tests were against Amazon S3 Ireland, BTW

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> --
>
> Key: HADOOP-12891
> URL: https://issues.apache.org/jira/browse/HADOOP-12891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: HADOOP-12891-001.patch, HADOOP-12891-002.patch
>
>
> In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk 
> size are very high [1],
> {noformat}
> /** Default size threshold for Amazon S3 object after which multi-part 
> copy is initiated. */
> private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
> /** Default minimum size of each part for multi-part copy. */
> private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower but still reasonable threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and size to 25 MB with good results.
> Amazon enforces a minimum of 5 MB [2].
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. This 
> very high threshold for utilizing the multipart functionality can make the 
> performance considerably worse, particularly for files in the 100MB to 5GB 
> range which is fairly typical for mapreduce job outputs.
> Two apparent options are:
> 1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both. This seems preferable as the 
> accompanying documentation [3] for these configuration properties actually 
> already says that they are applicable for either "uploads or copies". We just 
> need to add in the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> like:
> {noformat}
> /* Handle copies in the same way as uploads. */
> transferConfiguration.setMultipartCopyPartSize(partSize);
> transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be independently configured, maybe change the defaults to be lower 
> than Amazon's, set into {{TransferManagerConfiguration}} in the same way.
> In any case, at a minimum, if neither of the above options is an acceptable 
> change, the config documentation should be adjusted to match the code, noting 
> that {{fs.s3a.multipart.threshold}} and {{fs.s3a.multipart.size}} are 
> applicable to uploads of new objects only and not copies (i.e. renaming 
> objects).
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12891:

Status: Open  (was: Patch Available)

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> --
>
> Key: HADOOP-12891
> URL: https://issues.apache.org/jira/browse/HADOOP-12891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Attachments: HADOOP-12891-001.patch
>
>
> In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk 
> size are very high [1],
> {noformat}
> /** Default size threshold for Amazon S3 object after which multi-part 
> copy is initiated. */
> private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
> /** Default minimum size of each part for multi-part copy. */
> private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower but still reasonable threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and size to 25 MB with good results.
> Amazon enforces a minimum of 5 MB [2].
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. This 
> very high threshold for utilizing the multipart functionality can make the 
> performance considerably worse, particularly for files in the 100MB to 5GB 
> range which is fairly typical for mapreduce job outputs.
> Two apparent options are:
> 1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both. This seems preferable as the 
> accompanying documentation [3] for these configuration properties actually 
> already says that they are applicable for either "uploads or copies". We just 
> need to add in the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> like:
> {noformat}
> /* Handle copies in the same way as uploads. */
> transferConfiguration.setMultipartCopyPartSize(partSize);
> transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be independently configured, maybe change the defaults to be lower 
> than Amazon's, set into {{TransferManagerConfiguration}} in the same way.
> In any case, at a minimum, if neither of the above options is an acceptable 
> change, the config documentation should be adjusted to match the code, noting 
> that {{fs.s3a.multipart.threshold}} and {{fs.s3a.multipart.size}} are 
> applicable to uploads of new objects only and not copies (i.e. renaming 
> objects).
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: HADOOP-12563.13.patch

{code}

patch 12-13 diff:

< +  " does not require a token.  Check your configuration.  " +
---
> +  "' does not require a token.  Check your configuration.  " +
 
< +  throw new Exception(message);
---
> +  throw new IllegalArgumentException(message);


{code}

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.
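
For reference, a minimal sketch of reading and writing a token file with the existing {{org.apache.hadoop.security.Credentials}} API (the legacy format this utility keeps supporting; the new protobuf format and the dtutil command line are not shown here):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path tokenFile = new Path(args[0]);

    // Read an existing token file.
    Credentials creds = Credentials.readTokenStorageFile(tokenFile, conf);
    System.out.println("tokens in file: " + creds.numberOfTokens());

    // Write it back out, e.g. after appending tokens from another file.
    creds.writeTokenStorageFile(tokenFile, conf);
  }
}
{code}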



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.005.patch

Patch 005 is identical to 004, but adds documentation in CommandsManual.md

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
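
A sketch of the factory shape argued for above; the method below is hypothetical (no such overload exists in CredentialProviderFactory today), shown only to illustrate how a caller-supplied password could reach a keystore-backed provider instead of the "none" default:

{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

// Hypothetical sketch only: the shell command would prompt for a keystore
// password and pass it down here rather than relying on the environment
// variable or the default of "none".
public abstract class PasswordAwareCredentialProviderFactory {
  public abstract CredentialProvider createProvider(
      URI providerUri, Configuration conf, char[] keystorePassword)
      throws IOException;
}
{code}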



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: (was: dtutil_diff_07_08)

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, dtutil-test-out, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-21 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-04-21 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12291 started by Esther Kundin.
--
> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
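
The nested case amounts to a transitive closure over the "member of" relation; a minimal sketch of that expansion (the LDAP queries themselves are omitted, {{directGroupsOf}} stands in for a single LDAP lookup):

{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class NestedGroupExpansion {
  // Expand direct groups (e.g. [A]) into all transitive groups (e.g. [A, B]),
  // using the 'seen' set to guard against membership cycles.
  public static Set<String> expand(List<String> directGroups,
      Function<String, List<String>> directGroupsOf) {
    Set<String> seen = new LinkedHashSet<>(directGroups);
    Deque<String> pending = new ArrayDeque<>(directGroups);
    while (!pending.isEmpty()) {
      for (String parent : directGroupsOf.apply(pending.pop())) {
        if (seen.add(parent)) {
          pending.push(parent);
        }
      }
    }
    return seen;
  }
}
{code}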



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252070#comment-15252070
 ] 

Steve Loughran commented on HADOOP-12767:
-

Linked to HADOOP-9991

Know that updating JARs inevitably breaks something, especially downstream, so 
we are usually pretty reluctant to do it. If it is a security problem, then it 
should get more attention, but it's still something we are cautious about.

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Attachments: HADOOP-12767-branch-2.004.patch, HADOOP-12767.001.patch, 
> HADOOP-12767.002.patch, HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12767:

Affects Version/s: (was: 3.0.0)
   2.7.2
  Component/s: build

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Attachments: HADOOP-12767-branch-2.004.patch, HADOOP-12767.001.patch, 
> HADOOP-12767.002.patch, HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12866) add a subcommand for gridmix

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252048#comment-15252048
 ] 

Hadoop QA commented on HADOOP-12866:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-gridmix in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-gridmix in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12799983/HADOOP-12866.01.patch 
|
| JIRA Issue | HADOOP-12866 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 0a40b3366e6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7da5847 |
| shellcheck | v0.4.3 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9141/testReport/ |
| modules | C:  hadoop-common-project/hadoop-common   
hadoop-tools/hadoop-gridmix  U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9141/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12866) add a subcommand for gridmix

2016-04-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-12866:

Attachment: HADOOP-12866.01.patch

> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12866) add a subcommand for gridmix

2016-04-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki reassigned HADOOP-12866:
---

Assignee: Kai Sasaki

> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12866) add a subcommand for gridmix

2016-04-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-12866:

Status: Patch Available  (was: Open)

> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13044) Amazon S3 library depends on http components 4.3

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13044:

Affects Version/s: 2.8.0

> Amazon S3 library depends on http components 4.3
> 
>
> Key: HADOOP-13044
> URL: https://issues.apache.org/jira/browse/HADOOP-13044
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 2.8.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13044.01.patch
>
>
> When using the AWS SDK on the classpath of Hadoop, we faced an issue caused 
> by an incompatibility between the AWS SDK and httpcomponents.
> {code}
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
> {code}
> The latest AWS SDK depends on 4.3.x which has 
> [DefaultConnectionKeepAliveStrategy.INSTANCE|http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.html#INSTANCE].
> This field was introduced in 4.3.
> This will allow us to avoid {{CLASSPATH}} conflicts around httpclient 
> versions.
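
A quick diagnostic (illustrative only) for checking which httpclient actually wins on the classpath: the field the SDK trips over only exists from 4.3 onwards.

{code}
public class HttpClientVersionCheck {
  public static void main(String[] args) throws Exception {
    Class<?> strategy = Class.forName(
        "org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy");
    // Which jar the class was loaded from, i.e. which version won.
    System.out.println(
        strategy.getProtectionDomain().getCodeSource().getLocation());
    try {
      strategy.getField("INSTANCE");
      System.out.println("INSTANCE present: httpclient is 4.3 or later");
    } catch (NoSuchFieldException e) {
      System.out.println("INSTANCE missing: an older httpclient shadows the one the SDK needs");
    }
  }
}
{code}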



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13044) Amazon S3 library depends on http components 4.3

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252013#comment-15252013
 ] 

Steve Loughran commented on HADOOP-13044:
-

# HADOOP-12767 covers both the update and the extra deprecation warnings
# Hadoop 2.8 is (currently) building against aws-java-sdk-s3 version 1.10.6. 
Is this the version of the AWS SDK that is exhibiting this problem, or are you 
testing against a more recent one?

> Amazon S3 library depends on http components 4.3
> 
>
> Key: HADOOP-13044
> URL: https://issues.apache.org/jira/browse/HADOOP-13044
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13044.01.patch
>
>
> When using the AWS SDK on the classpath of Hadoop, we faced an issue caused 
> by an incompatibility between the AWS SDK and httpcomponents.
> {code}
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
> {code}
> The latest AWS SDK depends on 4.3.x which has 
> [DefaultConnectionKeepAliveStrategy.INSTANCE|http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.html#INSTANCE].
> This field was introduced in 4.3.
> This will allow us to avoid {{CLASSPATH}} conflicts around httpclient 
> versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13044) Amazon S3 library depends on http components 4.3

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13044:

Component/s: fs/s3
 build
Summary: Amazon S3 library depends on http components 4.3  (was: 
Upgrade httpcomponents to avoid CLASSPATH confliction)

> Amazon S3 library depends on http components 4.3
> 
>
> Key: HADOOP-13044
> URL: https://issues.apache.org/jira/browse/HADOOP-13044
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13044.01.patch
>
>
> When using the AWS SDK on the classpath of Hadoop, we faced an issue caused 
> by an incompatibility between the AWS SDK and httpcomponents.
> {code}
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
> {code}
> The latest AWS SDK depends on 4.3.x which has 
> [DefaultConnectionKeepAliveStrategy.INSTANCE|http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.html#INSTANCE].
> This field was introduced in 4.3.
> This will allow us to avoid {{CLASSPATH}} conflicts around httpclient 
> versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12751:

Status: Patch Available  (was: Open)

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available at the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator of 
> whether the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (by having system tools rewrite the name 
> to, for example, user_ad_local) due to downstream consequences.
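
For reference, the OS-side lookup the description relies on is just {{id -Gn <user>}}, which is effectively what {{ShellBasedUnixGroupsMapping}} runs; a minimal standalone sketch (illustrative, not the attached patch):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OsGroupLookup {
  // Returns the groups the OS reports for a user, including trusted-domain
  // users such as "user@ad.local" when sssd (or similar) is configured.
  public static List<String> groupsOf(String user) throws Exception {
    Process p = new ProcessBuilder("id", "-Gn", user).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String line = r.readLine();
      p.waitFor();
      if (line == null) {
        return Collections.emptyList();
      }
      return Arrays.asList(line.trim().split("\\s+"));
    }
  }
}
{code}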



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12751:

Status: Open  (was: Patch Available)

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available at the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator of 
> whether the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (by having system tools rewrite the name 
> to, for example, user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13043) Add LICENSE.txt entries for bundled javascript dependencies

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252005#comment-15252005
 ] 

Steve Loughran commented on HADOOP-13043:
-

+1

> Add LICENSE.txt entries for bundled javascript dependencies
> ---
>
> Key: HADOOP-13043
> URL: https://issues.apache.org/jira/browse/HADOOP-13043
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-13043.001.patch, hadoop-13043.002.patch
>
>
> None of our bundled javascript dependencies are mentioned in LICENSE.txt. 
> Let's fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13042) Restore lost leveldbjni LICENSE and NOTICE changes

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252003#comment-15252003
 ] 

Steve Loughran commented on HADOOP-13042:
-

+1

> Restore lost leveldbjni LICENSE and NOTICE changes
> --
>
> Key: HADOOP-13042
> URL: https://issues.apache.org/jira/browse/HADOOP-13042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-13042.001.patch
>
>
> As noted on HADOOP-12893, we lost the leveldbjni related NOTICE and LICENSE 
> updates done in YARN-1704 when HADOOP-10956 was committed. Let's restore them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251999#comment-15251999
 ] 

Steve Loughran commented on HADOOP-12563:
-

been too busy to look. Let's get it in and evolve it in place if needed.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, dtutil-test-out, 
> dtutil_diff_07_08, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecrate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent s3a configuration values and incorrect comments

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251996#comment-15251996
 ] 

Hadoop QA commented on HADOOP-12671:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12671 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779171/HADOOP-12671.000.patch
 |
| JIRA Issue | HADOOP-12671 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9140/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Inconsistent s3a configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (appears only in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (appears only in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251962#comment-15251962
 ] 

Steve Loughran commented on HADOOP-12982:
-

looks like HADOOP-12671 is similar here

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}} are not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent s3a configuration values and incorrect comments

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12671:

Summary: Inconsistent s3a configuration values and incorrect comments  
(was: Inconsistent configuration values and incorrect comments)

> Inconsistent s3a configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (appears only in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (appears only in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12837) FileStatus.getModificationTime returns 0 for directories on S3n & S3a

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12837:

Priority: Minor  (was: Major)

> FileStatus.getModificationTime returns 0 for directories on S3n & S3a
> -
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>Priority: Minor
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I searched for a solution to this but couldn't find one that fits my use case. 
> S3FileStatus seems to be an option, but I use this API on both HDFS and S3, so 
> it is not suitable.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix is available for this.
> Thanks,
> Jagdish
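
For reference, a minimal sketch of the call being discussed (illustrative only; the 
path is a placeholder, and the behaviour noted in the comment is the reported bug, 
not a recommendation):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ModTimeCheck {
  public static void main(String[] args) throws Exception {
    // The URI scheme (hdfs://, s3n://, s3a://) selects the FileSystem implementation,
    // so the same call works against both HDFS and S3.
    Path dir = new Path(args[0]);
    FileSystem fs = dir.getFileSystem(new Configuration());
    FileStatus status = fs.getFileStatus(dir);
    // For directories on S3N/S3A this currently prints mtime=0 (the reported issue).
    System.out.println(status.getPath() + " mtime=" + status.getModificationTime());
  }
}
{code}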



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12837) FileStatus.getModificationTime returns 0 for directories on S3n & S3a

2016-04-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12837:

Summary: FileStatus.getModificationTime returns 0 for directories on S3n & 
S3a  (was: FileStatus.getModificationTime not working on S3)

> FileStatus.getModificationTime returns 0 for directories on S3n & S3a
> -
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I searched for a solution to this but couldn't find one that fits my use case. 
> S3FileStatus seems to be an option, but I use this API on both HDFS and S3, so 
> it is not suitable.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix is available for this.
> Thanks,
> Jagdish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-13045) hadoop_add_classpath is not working in .hadooprc

2016-04-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251956#comment-15251956
 ] 

Allen Wittenauer edited comment on HADOOP-13045 at 4/21/16 2:22 PM:


Correct.  .hadooprc is the basic equivalent of the hadoop-env.sh file.  To 
quote the UnixShellGuide.md file: 

{code}
 This file is always read to initialize and override any variables that the 
user may want to customize.
{code}

There is no mention of the functions working inside of it.  None of the _HOME 
vars are guaranteed to be set either as a result.

In any case, HADOOP_USER_CLASSPATH should be used to do what you're trying to 
do.
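
For illustration, a minimal {{.hadooprc}} along those lines (a sketch that assumes 
the HADOOP_USER_CLASSPATH variable mentioned above behaves as described; not a 
documented recommendation):

{code}
#!/usr/bin/env bash
# Set the env var that is read when the CLASSPATH is initialized, instead of
# calling hadoop_add_classpath before the shell functions are available.
HADOOP_USER_CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
{code}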


was (Author: aw):
Correct.  .hadooprc is the basic equivalent of the hadoop-env.sh file.  To 
quote the UnixShellGuide.md file: 

{code}
 This file is always read to initialize and override any variables that the 
user may want to customize.
{code}

There is no mention of the functions working inside of it.  None of the _HOME 
vars are guaranteed to be set either as a result.

> hadoop_add_classpath is not working in .hadooprc
> 
>
> Key: HADOOP-13045
> URL: https://issues.apache.org/jira/browse/HADOOP-13045
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>
> {{hadoop_basic_function}} resets {{CLASSPATH}} after {{.hadooprc}} is called.
> {noformat}
> $ hadoop --debug version
> (snip)
> DEBUG: Applying the user's .hadooprc
> DEBUG: Initial CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
> DEBUG: Initialize CLASSPATH
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/build/native
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/lib/native
> DEBUG: Initial CLASSPATH=/usr/local/hadoop/share/hadoop/common/lib/*
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13045) hadoop_add_classpath is not working in .hadooprc

2016-04-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251956#comment-15251956
 ] 

Allen Wittenauer commented on HADOOP-13045:
---

Correct.  .hadooprc is the basic equivalent of the hadoop-env.sh file.  To 
quote the UnixShellGuide.md file: 

{code}
 This file is always read to initialize and override any variables that the 
user may want to customize.
{code}

There is no mention of the functions working inside of it.  None of the _HOME 
vars are guaranteed to be set either as a result.

> hadoop_add_classpath is not working in .hadooprc
> 
>
> Key: HADOOP-13045
> URL: https://issues.apache.org/jira/browse/HADOOP-13045
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>
> {{hadoop_basic_function}} resets {{CLASSPATH}} after {{.hadooprc}} is called.
> {noformat}
> $ hadoop --debug version
> (snip)
> DEBUG: Applying the user's .hadooprc
> DEBUG: Initial CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
> DEBUG: Initialize CLASSPATH
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/build/native
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/lib/native
> DEBUG: Initial CLASSPATH=/usr/local/hadoop/share/hadoop/common/lib/*
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12234) Web UI Framable Page

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251875#comment-15251875
 ] 

Hadoop QA commented on HADOOP-12234:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12234 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12746655/HADOOP-12234-v3-master.patch
 |
| JIRA Issue | HADOOP-12234 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9139/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Web UI Framable Page
> 
>
> Key: HADOOP-12234
> URL: https://issues.apache.org/jira/browse/HADOOP-12234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HADOOP-12234-v2-master.patch, 
> HADOOP-12234-v3-master.patch, HADOOP-12234.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site.  
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251865#comment-15251865
 ] 

Hadoop QA commented on HADOOP-13046:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-assemblies in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-assemblies in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | 

[jira] [Commented] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251846#comment-15251846
 ] 

Larry McCay commented on HADOOP-13011:
--

Thanks, [~ste...@apache.org]!

I will commit this to branch-2, branch-2.8 and trunk today.

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch, HADOOP-13011-004.patch
>
>
> HADOOP-12942 discusses the unobviousness of the use of a default password for 
> the keystores for keystore-based credential providers. This patch adds 
> documentation to the CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12234) Web UI Framable Page

2016-04-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251816#comment-15251816
 ] 

Larry McCay commented on HADOOP-12234:
--

This is an issue that has come up for our users and I just spent a few days 
duplicating this work in https://issues.apache.org/jira/browse/HADOOP-13008. 

Can someone tell me why this patch has stalled?

A couple comments on the differences between the two implementations:

1. package of the class - mine is in the same package as CrossOriginFilter and 
RestCsrfPreventionFilter: org.apache.hadoop.security.http. I think that it 
makes sense to keep these web app security filters together. I don't really 
care for the "lib" in this package name but maybe this is an existing pattern 
in hadoop elsewhere?
2. configuration prefixes - in order to accommodate some ability for some 
components to override a global setting, I proposed the use of separate 
prefixes. A global one that would be used if a component specific one was not 
found. See the JIRA for comments around that.
3. filter initializer - it seems that this implementation has its own filter 
initializer, whereas HADOOP-13008 introduces the filter and would rely on 
integration specific initializers which would be able to interrogate the 
prefixed configuration for each integration point.

I think we should decide which implementation to resolve as a duplicate based on 
which one is closer to what we need, adjusting the one we keep to accommodate the 
other. I don't have any problem discarding HADOOP-13008, but let's discuss it here.

I would like to get this feature in as soon as we possibly can in order to 
address the needs and concerns of our customers/users.
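
For context, the kind of filter both patches introduce looks roughly like the sketch 
below (class, parameter and header values here are hypothetical, not taken from 
either patch):

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch of an X-Frame-Options filter; the real patches additionally
// deal with filter initializers and configuration prefixes as discussed above.
public class XFrameOptionsSketchFilter implements Filter {
  private String value = "SAMEORIGIN";

  @Override
  public void init(FilterConfig conf) throws ServletException {
    String configured = conf.getInitParameter("xframe-options");
    if (configured != null) {
      value = configured;
    }
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    // Setting the header on every response stops other origins from framing the
    // page (the clickjacking defense referenced in the links below).
    ((HttpServletResponse) res).setHeader("X-Frame-Options", value);
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}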

> Web UI Framable Page
> 
>
> Key: HADOOP-12234
> URL: https://issues.apache.org/jira/browse/HADOOP-12234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HADOOP-12234-v2-master.patch, 
> HADOOP-12234-v3-master.patch, HADOOP-12234.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site.  
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-04-21 Thread Teruyoshi Zenmyo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo updated HADOOP-13046:
--
Status: Patch Available  (was: Open)

> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
> Attachments: HADOOP-13046.patch
>
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in pom.xml of hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-04-21 Thread Teruyoshi Zenmyo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo updated HADOOP-13046:
--
Attachment: HADOOP-13046.patch

> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
> Attachments: HADOOP-13046.patch
>
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in pom.xml of hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-04-21 Thread Teruyoshi Zenmyo (JIRA)
Teruyoshi Zenmyo created HADOOP-13046:
-

 Summary: Fix hadoop-dist to adapt to HDFS client library separation
 Key: HADOOP-13046
 URL: https://issues.apache.org/jira/browse/HADOOP-13046
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Teruyoshi Zenmyo


Some build-related files should be updated to adapt to HDFS client library 
separation. The following issues exist:
- hdfs.h is not included.
- hadoop.component is not set in pom.xml of hdfs client libraries.
- hdfs-native-client is not included.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-13046) Fix hadoop-dist to adapt to HDFS client library separation

2016-04-21 Thread Teruyoshi Zenmyo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo reassigned HADOOP-13046:
-

Assignee: Teruyoshi Zenmyo

> Fix hadoop-dist to adapt to HDFS client library separation
> --
>
> Key: HADOOP-13046
> URL: https://issues.apache.org/jira/browse/HADOOP-13046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Teruyoshi Zenmyo
>Assignee: Teruyoshi Zenmyo
>
> Some build-related files should be updated to adapt to HDFS client library 
> separation. The following issues exist:
> - hdfs.h is not included.
> - hadoop.component is not set in pom.xml of hdfs client libraries.
> - hdfs-native-client is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13044) Upgrade httpcomponents to avoid CLASSPATH confliction

2016-04-21 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251698#comment-15251698
 ] 

Masatake Iwasaki commented on HADOOP-13044:
---

Thanks for reporting this, [~lewuathe]. HADOOP-12767 seems to be relevant. Can 
you check that there are no problems with compilation and tests in hadoop-common 
and hadoop-yarn?

> Upgrade httpcomponents to avoid CLASSPATH confliction
> -
>
> Key: HADOOP-13044
> URL: https://issues.apache.org/jira/browse/HADOOP-13044
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13044.01.patch
>
>
> When the AWS SDK is on the Hadoop classpath, we faced an issue caused by an 
> incompatibility between the AWS SDK and httpcomponents.
> {code}
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
> {code}
> The latest AWS SDK depends on 4.3.x which has 
> [DefaultConnectionKeepAliveStrategy.INSTANCE|http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.html#INSTANCE].
>  This field was introduced in 4.3.
> This will allow us to avoid {{CLASSPATH}} conflicts around httpclient 
> versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13045) hadoop_add_classpath is not working in .hadooprc

2016-04-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251660#comment-15251660
 ] 

Akira AJISAKA commented on HADOOP-13045:


My .hadooprc
{code}
#!/usr/bin/env bash
hadoop_add_classpath /root/hadoop-tools-0.1-SNAPSHOT.jar
{code}

> hadoop_add_classpath is not working in .hadooprc
> 
>
> Key: HADOOP-13045
> URL: https://issues.apache.org/jira/browse/HADOOP-13045
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>
> {{hadoop_basic_function}} resets {{CLASSPATH}} after {{.hadooprc}} is called.
> {noformat}
> $ hadoop --debug version
> (snip)
> DEBUG: Applying the user's .hadooprc
> DEBUG: Initial CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
> DEBUG: Initialize CLASSPATH
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/build/native
> DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/lib/native
> DEBUG: Initial CLASSPATH=/usr/local/hadoop/share/hadoop/common/lib/*
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2016-04-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251657#comment-15251657
 ] 

Gergely Novák commented on HADOOP-12550:


What if we tried to solve the issue by explicitly checking whether the destination 
file exists (on Windows) and, if so, deleting it before the rename? Like this:
{code}
  // MoveFile on Windows cannot replace an existing destination, so remove it first.
  if (Shell.WINDOWS && dst.exists()) {
    dst.delete();
  }
  renameTo0(src.getAbsolutePath(), dst.getAbsolutePath());
{code}

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13045) hadoop_add_classpath is not working in .hadooprc

2016-04-21 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13045:
--

 Summary: hadoop_add_classpath is not working in .hadooprc
 Key: HADOOP-13045
 URL: https://issues.apache.org/jira/browse/HADOOP-13045
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira AJISAKA


{{hadoop_basic_function}} resets {{CLASSPATH}} after {{.hadooprc}} is called.
{noformat}
$ hadoop --debug version
(snip)
DEBUG: Applying the user's .hadooprc
DEBUG: Initial CLASSPATH=/root/hadoop-tools-0.1-SNAPSHOT.jar
DEBUG: Initialize CLASSPATH
DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/build/native
DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/local/hadoop/lib/native
DEBUG: Initial CLASSPATH=/usr/local/hadoop/share/hadoop/common/lib/*
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251603#comment-15251603
 ] 

Steve Loughran commented on HADOOP-13011:
-

+1

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch, HADOOP-13011-004.patch
>
>
> HADOOP-12942 discusses the unobviousness of the use of a default password for 
> the keystores for keystore-based credential providers. This patch adds 
> documentation to the CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12985) Support MetricsSource interface for DecayRpcScheduler Metrics

2016-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251375#comment-15251375
 ] 

Hudson commented on HADOOP-12985:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9641 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9641/])
HADOOP-12985. Support MetricsSource interface for DecayRpcScheduler (xyao: rev 
5bd7b592e5fbe4d448fd127c15d29f3121b8a833)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Metrics2Util.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> Support MetricsSource interface for DecayRpcScheduler Metrics
> -
>
> Key: HADOOP-12985
> URL: https://issues.apache.org/jira/browse/HADOOP-12985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-12985.00.patch, HADOOP-12985.02.patch, 
> HADOOP-12985.03.patch, HADOOP-12985.04.patch, HADOOP-12985.05.patch, 
> HADOOP-12985.06.patch
>
>
> This allows metrics collector such as AMS to collect it with MetricsSink. The 
> per user RPC call counts, schedule decisions and per priority response time 
> will be useful to detect and troubleshoot Hadoop RPC server problems such as Namenode 
> overload issues. 
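
As a reminder of the interface involved, a minimal MetricsSource implementation 
looks roughly like the sketch below (record and gauge names are hypothetical, not 
the actual DecayRpcScheduler metrics):

{code}
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.lib.Interns;

// Hypothetical sketch: the metrics system polls getMetrics() on each snapshot,
// which is what lets sinks (such as AMS) collect the values.
public class ExampleSource implements MetricsSource {
  private volatile long callCount;

  public void incrCallCount() {
    callCount++;
  }

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    collector.addRecord("ExampleRecord")
        .setContext("rpc")
        .addGauge(Interns.info("CallCount", "Example call count"), callCount);
  }
}
{code}

A source like this would then be registered with the metrics system (for example via 
DefaultMetricsSystem.instance().register(...)), after which any configured sink can 
pick up the record.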



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12985) Support MetricsSource interface for DecayRpcScheduler Metrics

2016-04-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12985:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. I've committed the patch to trunk, branch-2 and 
branch-2.8.

> Support MetricsSource interface for DecayRpcScheduler Metrics
> -
>
> Key: HADOOP-12985
> URL: https://issues.apache.org/jira/browse/HADOOP-12985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-12985.00.patch, HADOOP-12985.02.patch, 
> HADOOP-12985.03.patch, HADOOP-12985.04.patch, HADOOP-12985.05.patch, 
> HADOOP-12985.06.patch
>
>
> This allows metrics collector such as AMS to collect it with MetricsSink. The 
> per user RPC call counts, schedule decisions and per priority response time 
> will be useful to detect and troubleshoot Hadoop RPC server problems such as Namenode 
> overload issues. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

