[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305134#comment-15305134
 ] 

Andrew Wang commented on HADOOP-13155:
--

Thanks for working on this patch, Xiao, and thanks Yongjun, Arun, and Wei-Chiu 
for weighing in. I had a few review comments:

* Mildly prefer to keep the newline at the top of KMSClientProvider
* Regarding moving the config key, I think the other Renewers get around this 
by embedding the static class within the parent class and accessing the required 
state statically; I think the parent here would be KMSClientProvider (see the 
sketch after this list). It wouldn't be good to tie renewal to this HDFS key 
anyway, since the KMS is used for more than just HDFS encryption.
* KMSClientProvider#addDelegationToken and cancelDT just pass a dummy {{url}} 
to the {{authUrl}} call. Why does renewal in particular need a URL with 
USER_NAME set? IIUC this is needed for PseudoAuthentication, but here we're 
doing DT authentication?
* Extra newline in declaration of generateDT in KMSClientProvider
* In the new test in TestKMS, can we configure the Kerberos config in 
testDTOKerberized, and then pass the Configuration to testDelegationTokensOps? 
I think that's cleaner.
* Also recommend doubling the timeout Rule, since things often run slower on 
overloaded Jenkins servers and we don't want a new flake.
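
For illustration, a minimal sketch of the nested-renewer shape suggested in the 
second bullet above. The class layout, constant, and "kms-dt" token kind are 
assumptions for the example rather than the contents of the patch; the renewer 
would be registered through a META-INF/services/org.apache.hadoop.security.token.TokenRenewer 
entry so ServiceLoader can discover it.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class KMSClientProvider /* extends KeyProvider, ... */ {

  // Assumed token kind for KMS delegation tokens; illustrative only.
  public static final Text TOKEN_KIND = new Text("kms-dt");

  /**
   * Nested so the renewer can statically reach KMSClientProvider state
   * instead of depending on an HDFS-specific config key.
   */
  public static class KMSTokenRenewer extends TokenRenewer {

    @Override
    public boolean handleKind(Text kind) {
      return TOKEN_KIND.equals(kind);
    }

    @Override
    public boolean isManaged(Token<?> token) throws IOException {
      return true; // tokens of this kind can be renewed and cancelled
    }

    @Override
    public long renew(Token<?> token, Configuration conf)
        throws IOException, InterruptedException {
      // Build/look up the KMS key provider from conf and delegate renewal to it.
      throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public void cancel(Token<?> token, Configuration conf)
        throws IOException, InterruptedException {
      // Same idea as renew(), but cancelling the token.
      throw new UnsupportedOperationException("sketch only");
    }
  }
}
{code}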

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, so the 
> tokens are never renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked anywhere in the Hadoop code base. KMS does not 
> have any renew hook.
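
To make the renewal path concrete, here is a rough paraphrase (not the actual 
{{Token#renew}} code) of the dispatch described above: renewers are discovered 
via ServiceLoader, and a token kind with no registered renewer falls back to the 
no-op {{TrivialRenewer}}, which is why KMS/HttpFS tokens currently never get renewed.

{code}
import java.io.IOException;
import java.util.ServiceLoader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public final class RenewerLookupSketch {

  /** Pick the first registered renewer that claims this token's kind. */
  static TokenRenewer findRenewer(Token<?> token) throws IOException {
    for (TokenRenewer candidate : ServiceLoader.load(TokenRenewer.class)) {
      if (candidate.handleKind(token.getKind())) {
        return candidate;
      }
    }
    // In the real code the fallback is the TrivialRenewer mentioned above,
    // so tokens of an unregistered kind simply are not renewed.
    throw new IOException("no renewer registered for kind " + token.getKind());
  }

  static long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    return findRenewer(token).renew(token, conf);
  }
}
{code}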






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305123#comment-15305123
 ] 

Hadoop QA commented on HADOOP-13081:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 32s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.security.UserGroupInformation defines clone() but 
doesn't implement Cloneable  At UserGroupInformation.java:implement Cloneable  
At UserGroupInformation.java:[lines 631-634] |
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806779/HADOOP-13081.patch |
| JIRA Issue | HADOOP-13081 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f16841efe339 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4ca8859 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/whitespace-eol.txt
 |
| findbugs | 

[jira] [Commented] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305110#comment-15305110
 ] 

Andrew Wang commented on HADOOP-13132:
--

Wow, that is a pretty dirty trick for throwException. Thanks for digging in 
here, Wei-Chiu.

Patch looks fine overall, but do we really need those new LOG lines? The caller 
will very likely have appropriate logic to handle the IOException.

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch
>
>
> An Oozie job with a single shell action fails (this may not be important, but 
> if you need the exact details I can provide them) with an error message coming 
> from the NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
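
For context, a sketch of one way the unsafe cast could be guarded, assuming the 
surrounding method declares both IOException and GeneralSecurityException (as 
decryptEncryptedKey does); this is illustrative and not necessarily what the 
attached patches implement.

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;

public final class CastGuardSketch {

  /**
   * Rethrow the underlying cause with a type check instead of a blind cast:
   * an AuthenticationException (or anything else unexpected) is wrapped in an
   * IOException rather than escaping as a ClassCastException.
   */
  static void rethrow(Exception cause) throws IOException, GeneralSecurityException {
    if (cause instanceof IOException) {
      throw (IOException) cause;
    }
    if (cause instanceof GeneralSecurityException) {
      throw (GeneralSecurityException) cause;
    }
    throw new IOException(cause);
  }
}
{code}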






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305103#comment-15305103
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

Simple patch. Can someone please review? (And also assign it to me; it looks 
like I don't have permissions to assign.)

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: HADOOP-13081.patch
>
>
> We have a scenario where we log in with Kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with Kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just re-login anew from the ticket cache. 
> getUGIFromTicketCache seems like the best option in the existing code, but 
> there doesn't appear to be a consistent way of handling the ticket cache 
> location - the above method, which I only see called in tests, uses a config 
> setting that is not used anywhere else, and the env variable for the location 
> that is used in the main ticket-cache-related methods is not set uniformly on 
> all paths - therefore, trying to find the correct ticket cache and passing it 
> via the config setting to getUGIFromTicketCache seems even hackier than doing 
> the clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with the first available one.
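
To make the request concrete, a small sketch of the desired usage; {{cloneUgi}} 
below is a stand-in for the clone method being asked for and does not exist on 
UserGroupInformation, while {{addToken}} and {{doAs}} are existing UGI calls.

{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public final class PerTaskUgiSketch {

  /** Placeholder for the requested API; no such method exists today. */
  static UserGroupInformation cloneUgi(UserGroupInformation base) {
    throw new UnsupportedOperationException("requested in HADOOP-13081");
  }

  static <T> T runTask(UserGroupInformation loginUgi,
      Token<? extends TokenIdentifier> taskToken,
      PrivilegedExceptionAction<T> work) throws Exception {
    // Desired flow: one Kerberos login, then a per-task UGI carrying only that
    // task's tokens, so tokens do not leak between tasks sharing the login.
    UserGroupInformation perTask = cloneUgi(loginUgi);
    perTask.addToken(taskToken);
    return perTask.doAs(work);
  }
}
{code}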






[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-13081:
--
Attachment: HADOOP-13081.patch

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: HADOOP-13081.patch
>
>
> We have a scenario where we log in with Kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with Kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just re-login anew from the ticket cache. 
> getUGIFromTicketCache seems like the best option in the existing code, but 
> there doesn't appear to be a consistent way of handling the ticket cache 
> location - the above method, which I only see called in tests, uses a config 
> setting that is not used anywhere else, and the env variable for the location 
> that is used in the main ticket-cache-related methods is not set uniformly on 
> all paths - therefore, trying to find the correct ticket cache and passing it 
> via the config setting to getUGIFromTicketCache seems even hackier than doing 
> the clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with the first available one.






[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-05-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-13081:
--
Status: Patch Available  (was: Open)

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: HADOOP-13081.patch
>
>
> We have a scenario where we log in with Kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with Kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just re-login anew from the ticket cache. 
> getUGIFromTicketCache seems like the best option in the existing code, but 
> there doesn't appear to be a consistent way of handling the ticket cache 
> location - the above method, which I only see called in tests, uses a config 
> setting that is not used anywhere else, and the env variable for the location 
> that is used in the main ticket-cache-related methods is not set uniformly on 
> all paths - therefore, trying to find the correct ticket cache and passing it 
> via the config setting to getUGIFromTicketCache seems even hackier than doing 
> the clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with the first available one.






[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305101#comment-15305101
 ] 

Hudson commented on HADOOP-13197:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9882 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9882/])
HADOOP-13197. Add non-decayed call metrics for DecayRpcScheduler. (xyao: rev 
4ca8859583839761663fc1fc1de1b3ce2e3fc5b5)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It would 
> be useful to also expose the non-decayed raw count for monitoring applications.
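
As a conceptual sketch only (not DecayRpcScheduler's actual internals), the idea 
is to track a raw, monotonically increasing count alongside the decayed one; the 
periodic decay touches only the latter.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public final class RawAndDecayedCounts {
  private final ConcurrentMap<String, AtomicLong> rawCounts = new ConcurrentHashMap<>();
  private final ConcurrentMap<String, AtomicLong> decayedCounts = new ConcurrentHashMap<>();
  private final double decayFactor;

  public RawAndDecayedCounts(double decayFactor) {
    this.decayFactor = decayFactor;
  }

  /** Called on every RPC; both counters move together. */
  public void recordCall(String caller) {
    rawCounts.computeIfAbsent(caller, k -> new AtomicLong()).incrementAndGet();
    decayedCounts.computeIfAbsent(caller, k -> new AtomicLong()).incrementAndGet();
  }

  /** Runs on a timer; only the decayed counters shrink. */
  public void decay() {
    for (AtomicLong count : decayedCounts.values()) {
      count.set((long) (count.get() * decayFactor));
    }
  }

  /** The value this JIRA asks to expose for monitoring. */
  public long getRawCount(String caller) {
    AtomicLong c = rawCounts.get(caller);
    return c == null ? 0 : c.get();
  }
}
{code}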






[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13197:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. I committed the patch to trunk, branch-2 and 
branch-2.8 based on the +1. The only delta from v02 is a comment fix. 



> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It would 
> be useful to also expose the non-decayed raw count for monitoring applications.






[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305081#comment-15305081
 ] 

Sangjin Lee commented on HADOOP-13070:
--

bq. I sure do wish Java 9 had something that would make it easier but I didn't 
see anything.

There is jigsaw (in java 9), but then there is always jigsaw. :)

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements, some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.






[jira] [Comment Edited] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305075#comment-15305075
 ] 

Sangjin Lee edited comment on HADOOP-13070 at 5/28/16 1:28 AM:
---

Thanks for the comments [~raviprak]! To answer your questions...

{quote}
ApplicationClassLoader seems like its only being used by MR. I grepped in Tez 
and Spark source, and didn't find any instances. Even if we were to do this 
only for MR, it would be incredibly valuable. I feel it would also set a 
precedent / pattern that other frameworks can then leverage.
{quote}
If you meant that the only usage is hadoop itself, I believe that's correct. 
Within hadoop, there are 3 usages today: MR task class isolation, hadoop run 
jar class isolation, and more recently the NM aux service class isolation. 
Since {{ApplicationClassLoader}} is part of the public API, however, other 
frameworks can use it if they wish.

bq. If we were to focus on MR, do you know what are the common problematic 
conflicting dependencies?
Unfortunately there are many to choose from, and quite a few of the well-known 
ones fall into the problem category. Some of the more famous ones include guava 
and jackson to name a couple.

But isolating class spaces has more benefits than simply preventing collisions. 
Since we're afraid of breaking users, hadoop has been very slow/conservative in 
upgrading any libraries it uses. As a result, we're stuck in the stone age for 
many of the libraries we use. Isolation would give hadoop more freedom to 
upgrade its dependencies without worrying about impacting users. That is of 
course provided that the isolation mode becomes the default, which may still be 
some time away.

{quote}
One alternative approach would be to start 2 JVMs for each MR-task: an 
MR-framework JVM and an MR-task JVM. We would do all MR-framework specific work 
in the MR-framework JVM and send raw Map-Reduce input key-value pairs over a 
socket and read output key value pairs over a socket from the MR-task JVM. The 
MR specific code running in the MR-task JVM would then be minimal and only 
needs to read over the socket and call the user code.
{quote}
That is an interesting idea to solve this problem. I still worry about the 
performance implication it has. Also, it still would not eliminate the problem 
entirely. As you pointed out, even in that separate process you still need a 
minimal amount of hadoop code which then pulls in the needed dependencies.



was (Author: sjlee0):
Thanks for the comments [~raviprak]! To answer your questions...

{quote}
ApplicationClassLoader seems like its only being used by MR. I grepped in Tez 
and Spark source, and didn't find any instances. Even if we were to do this 
only for MR, it would be incredibly valuable. I feel it would also set a 
precedent / pattern that other frameworks can then leverage.
{quote}
If you meant that the only usage is hadoop itself, I believe that's correct. 
Within hadoop, there are 3 usages today: MR task class isolation, hadoop run 
jar class isolation, and more recently the NM aux service class isolation. 
Since {{ApplicationClassLoader}} is part of the public API, other frameworks 
can use it.

bq. If we were to focus on MR, do you know what are the common problematic 
conflicting dependencies?
Unfortunately there are many to choose from, and quite a few of the well-known 
ones fall into the problem category. Some of the more famous ones include guava 
and jackson to name a couple.

But isolating class spaces has more benefits than simply preventing collisions. 
Since we're afraid of breaking users, hadoop has been very slow/conservative in 
upgrading any libraries it uses. As a result, we're stuck in the stone age for 
many of the libraries we use. Isolation would give hadoop more freedom to 
upgrade its dependencies without worrying about impacting users. That is of 
course provided that the isolation mode becomes the default, which may still be 
some time away.

{quote}
One alternative approach would be to start 2 JVMs for each MR-task: an 
MR-framework JVM and an MR-task JVM. We would do all MR-framework specific work 
in the MR-framework JVM and send raw Map-Reduce input key-value pairs over a 
socket and read output key value pairs over a socket from the MR-task JVM. The 
MR specific code running in the MR-task JVM would then be minimal and only 
needs to read over the socket and call the user code.
{quote}
That is an interesting idea to solve this problem. I still worry about the 
performance implication it has. Also, it still would not eliminate the problem 
entirely. As you pointed out, even in that separate process you still need a 
minimal amount of hadoop code which then pulls in the needed dependencies.


> classloading isolation improvements for cleaner and stricter dependencies
> -
>
>   

[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305075#comment-15305075
 ] 

Sangjin Lee commented on HADOOP-13070:
--

Thanks for the comments [~raviprak]! To answer your questions...

{quote}
ApplicationClassLoader seems like its only being used by MR. I grepped in Tez 
and Spark source, and didn't find any instances. Even if we were to do this 
only for MR, it would be incredibly valuable. I feel it would also set a 
precedent / pattern that other frameworks can then leverage.
{quote}
If you meant that the only usage is hadoop itself, I believe that's correct. 
Within hadoop, there are 3 usages today: MR task class isolation, hadoop run 
jar class isolation, and more recently the NM aux service class isolation. 
Since {{ApplicationClassLoader}} is part of the public API, other frameworks 
can use it.

bq. If we were to focus on MR, do you know what are the common problematic 
conflicting dependencies?
Unfortunately there are many to choose from, and quite a few of the well-known 
ones fall into the problem category. Some of the more famous ones include guava 
and jackson to name a couple.

But isolating class spaces has more benefits than simply preventing collisions. 
Since we're afraid of breaking users, hadoop has been very slow/conservative in 
upgrading any libraries it uses. As a result, we're stuck in the stone age for 
many of the libraries we use. Isolation would give hadoop more freedom to 
upgrade its dependencies without worrying about impacting users. That is of 
course provided that the isolation mode becomes the default, which may still be 
some time away.

{quote}
One alternative approach would be to start 2 JVMs for each MR-task: an 
MR-framework JVM and an MR-task JVM. We would do all MR-framework specific work 
in the MR-framework JVM and send raw Map-Reduce input key-value pairs over a 
socket and read output key value pairs over a socket from the MR-task JVM. The 
MR specific code running in the MR-task JVM would then be minimal and only 
needs to read over the socket and call the user code.
{quote}
That is an interesting idea to solve this problem. I still worry about the 
performance implication it has. Also, it still would not eliminate the problem 
entirely. As you pointed out, even in that separate process you still need a 
minimal amount of hadoop code which then pulls in the needed dependencies.
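
For readers unfamiliar with the mechanism, a rough sketch of the isolation 
pattern discussed above, assuming ApplicationClassLoader's (URL[], ClassLoader, 
List<String>) constructor; the jar path, the system-class prefixes, and 
com.example.UserMain are purely illustrative.

{code}
import java.net.URL;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.util.ApplicationClassLoader;

public final class IsolationSketch {

  public static void main(String[] args) throws Exception {
    // Jars containing the user code (illustrative path).
    URL[] userJars = { new URL("file:///tmp/user-app/app.jar") };

    // Prefixes that must still resolve from the parent (framework) loader;
    // everything else is loaded child-first from the user jars.
    List<String> systemClasses = Arrays.asList("java.", "org.apache.hadoop.");

    ClassLoader isolated = new ApplicationClassLoader(
        userJars, IsolationSketch.class.getClassLoader(), systemClasses);

    // The user code and its own copies of guava/jackson/etc. load here,
    // without colliding with the versions Hadoop itself ships.
    Class<?> userMain = Class.forName("com.example.UserMain", true, isolated);
    userMain.getMethod("main", String[].class).invoke(null, (Object) new String[0]);
  }
}
{code}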


> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements, some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.






[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305041#comment-15305041
 ] 

Wei-Chiu Chuang commented on HADOOP-13155:
--

What about using {{hadoop.security.key.provider.path}} instead of 
{{dfs.encryption.key.provider.uri}}? Reading some documentation, it seems both 
keys should be configured to the same value, so you wouldn't need HDFS configs at all.
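
A sketch of one possible shape of that suggestion (key names are spelled out 
literally for clarity; the exact fallback logic is up to the patch): prefer the 
common key, fall back to the HDFS one, then build the provider from whichever is set.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public final class ProviderUriSketch {

  static KeyProvider createProvider(Configuration conf) throws Exception {
    // Prefer hadoop.security.key.provider.path, fall back to the HDFS key.
    String uri = conf.get("hadoop.security.key.provider.path",
        conf.get("dfs.encryption.key.provider.uri"));
    if (uri == null || uri.isEmpty()) {
      return null; // no key provider configured
    }
    return KeyProviderFactory.get(new URI(uri), conf);
  }
}
{code}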

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, so the 
> tokens are never renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked anywhere in the Hadoop code base. KMS does not 
> have any renew hook.






[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305042#comment-15305042
 ] 

Hadoop QA commented on HADOOP-13155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 12s 
{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806771/HADOOP-13155.04.patch 
|
| JIRA Issue | HADOOP-13155 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9611/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, so the 
> tokens are never renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked anywhere in the Hadoop code base. KMS does not 
> have any renew hook.






[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13155:
---
Attachment: HADOOP-13155.04.patch

Had an offline review with [~yzhangal]; patch 4 addresses his comments:

* {{KMSTokenRenewer}} now uses its own logger
* Added more logging for when {{KMSTokenRenewer}} finds the keyProvider is not a 
DTExt instance
* Regarding the generics usage when creating delegation tokens:
** The way of creating a new {{Token}} for 
{{DelegationTokenAuthenticatedURL$Token#setDelegationToken}} seems verbose. 
Since we're accepting a generic type, I think this is the safe way to go; 
casting may end up throwing exceptions. I refactored KMSCP with a 
{{generateDelegationToken}} method to do this for both renew and cancel (a 
rough sketch of the idea follows below).
** Also, constructing the Token from the 4 parameters seems non-optimal. 
However, I don't feel changing its copy constructor to accept a Token is a 
good idea... IIUC the generic class Token is supposed to only accept {{T}}. 
For this reason, I didn't change anything. Feel free to comment if you think 
otherwise.

One thing Yongjun also brought up is the move of 
{{dfs.encryption.key.provider.uri}} from {{HdfsClientConfigKeys}} to 
{{CommonConfigurationKeys}}.
- The reason for this move is that the renewer lives in common (and kms), so the 
util method that creates the provider has to be in common, and hence that config 
needs to be readable from common. 
- I kept the dfs.xxx name for compatibility, but it is a bit weird to have a 
dfs.* key among the common configurations. I'm not sure what the best way of 
handling this is. [~andrew.wang], do you have any advice on it? Thanks!
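
A rough sketch of the helper described in the bullet above (the method name 
follows the comment, but the body is illustrative rather than the patch itself): 
rebuild the token from its four parts instead of casting the generic 
{{Token<?>}}, then hand it to the {{DelegationTokenAuthenticatedURL.Token}} 
wrapper used for the renew and cancel calls.

{code}
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL;

public final class GenerateDtSketch {

  static DelegationTokenAuthenticatedURL.Token generateDelegationToken(Token<?> dToken) {
    DelegationTokenAuthenticatedURL.Token token = new DelegationTokenAuthenticatedURL.Token();
    // Copy identifier, password, kind and service rather than casting dToken.
    Token<AbstractDelegationTokenIdentifier> dt = new Token<>(
        dToken.getIdentifier(), dToken.getPassword(),
        dToken.getKind(), dToken.getService());
    token.setDelegationToken(dt);
    return token;
  }
}
{code}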

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, so the 
> tokens are never renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked anywhere in the Hadoop code base. KMS does not 
> have any renew hook.






[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305033#comment-15305033
 ] 

Jitendra Nath Pandey commented on HADOOP-13197:
---

Minor nit: The comment still has Top.0 prefix. 
{code}
// Key: Top.0.Caller(xyz).Volume and Top.0.Caller(xyz).Priority
private void addTopNCallerSummary(MetricsRecordBuilder rb) 
{code}

+1 for the latest patch. The patch is ok to commit with the above minor fix.



> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It would 
> be useful to also expose the non-decayed raw count for monitoring applications.






[jira] [Commented] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305019#comment-15305019
 ] 

Kengo Seki commented on HADOOP-13193:
-

Sorry for not explaining enough about qbt, and thanks for the additional 
explanation and for committing, Chris!

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: 2.8.0
>
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to 0.3.0 now that it has passed the vote.






[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13105:
---
Affects Version/s: 3.0.0-alpha1

> Support timeouts in LDAP queries in LdapGroupsMapping.
> --
>
> Key: HADOOP-13105
> URL: https://issues.apache.org/jira/browse/HADOOP-13105
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, 
> HADOOP-13105.002.patch
>
>
> {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
> This can create a risk of a very long/infinite wait on a connection.
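
For reference, the standard JNDI/LDAP properties that bound how long connect and 
read may block look like the following (illustrative code, not LdapGroupsMapping's; 
the 60-second values are arbitrary).

{code}
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public final class LdapTimeoutSketch {

  static DirContext connect(String ldapUrl) throws NamingException {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, ldapUrl);
    // Without these, a connect or search against an unresponsive server can hang forever.
    env.put("com.sun.jndi.ldap.connect.timeout", "60000"); // ms to establish the connection
    env.put("com.sun.jndi.ldap.read.timeout", "60000");    // ms to wait for a response
    return new InitialDirContext(env);
  }
}
{code}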






[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305007#comment-15305007
 ] 

Mingliang Liu commented on HADOOP-13105:


The failing test is not related, and I can't reproduce it locally.

> Support timeouts in LDAP queries in LdapGroupsMapping.
> --
>
> Key: HADOOP-13105
> URL: https://issues.apache.org/jira/browse/HADOOP-13105
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, 
> HADOOP-13105.002.patch
>
>
> {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
> This can create a risk of a very long/infinite wait on a connection.






[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305006#comment-15305006
 ] 

Sean Busbey commented on HADOOP-13070:
--

thanks for all the work so far [~sjlee0]! I'm planning to catch up on this work 
over the weekend.

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements, some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.






[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304989#comment-15304989
 ] 

Hadoop QA commented on HADOOP-13105:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806755/HADOOP-13105.002.patch
 |
| JIRA Issue | HADOOP-13105 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 8fa6b861650a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9610/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9610/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9610/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9610/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support timeouts in LDAP queries in LdapGroupsMapping.
> 

[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13105:
---
Attachment: HADOOP-13105.002.patch

The v2 patch fixes the checkstyle warnings.

> Support timeouts in LDAP queries in LdapGroupsMapping.
> --
>
> Key: HADOOP-13105
> URL: https://issues.apache.org/jira/browse/HADOOP-13105
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, 
> HADOOP-13105.002.patch
>
>
> {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
> This can create a risk of a very long/infinite wait on a connection.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304944#comment-15304944
 ] 

Allen Wittenauer commented on HADOOP-12756:
---

The Jenkins servers should not be considered secure servers.

There are several hundred people who have direct access to them via the 
Jenkins UI and several thousand people via precommit. Keep in mind that the 
whole point of precommit is that it runs *arbitrary code*; a patch may 
contain any sort of change that may get installed and executed via maven. All 
precommit jobs across all projects run as the same user, so UNIX file 
permissions aren't going to help you here either.

In other words, if precommit has access to it, so does everyone else on the 
Internet.


> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because there is no native support for OSS in Hadoop.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between users’ applications and data storage, 
> similar to what has been done for S3 in Hadoop.
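
As a sketch of what "no code change" means in practice, applications would keep 
using the ordinary FileSystem API once the connector is on the classpath; the 
oss:// scheme, the bucket name, and any credential/endpoint properties are 
assumptions here, not settled configuration names.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class OssAccessSketch {

  public static void main(String[] args) throws Exception {
    // Credentials and endpoint would be supplied via configuration, as for s3a/s3n.
    Configuration conf = new Configuration();

    FileSystem fs = FileSystem.get(URI.create("oss://my-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("oss://my-bucket/data/"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}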






[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304939#comment-15304939
 ] 

Hadoop QA commented on HADOOP-13207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 45s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
6 new + 46 unchanged - 28 fixed = 52 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 37s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 48s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.net.TestDNS |
| JDK v1.7.0_101 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806734/HADOOP-13207-branch-2-002.patch
 |
| JIRA Issue | HADOOP-13207 |
| Optional Tests |  asflicense  mvnsite  

[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304845#comment-15304845
 ] 

Hadoop QA commented on HADOOP-13197:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 38 unchanged - 3 fixed = 38 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 59s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 5s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806723/HADOOP-13197.02.patch 
|
| JIRA Issue | HADOOP-13197 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d8d807af2ac2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5ea6fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9608/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9608/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9608/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9608/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
>

[jira] [Commented] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304840#comment-15304840
 ] 

Hudson commented on HADOOP-13193:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9880 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9880/])
HADOOP-13193. Upgrade to Apache Yetus 0.3.0. Contributed by Kengo Seki. 
(cnauroth: rev da074771977fe3de8acea441b096291c96cf59d9)
* dev-support/bin/qbt
* dev-support/bin/yetus-wrapper


> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: 2.8.0
>
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-008.patch

Patch 008

This includes the HADOOP-13207 "specify listStatus and listFiles" work in
hadoop-common; the tests there are based on the work here, pulled up along
with the test utility classes that accompany them.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-27 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304825#comment-15304825
 ] 

Ravi Prakash commented on HADOOP-13070:
---

Hi Sangjin! Thanks for taking this up. I look forward to all your improvements.

{{ApplicationClassLoader}} seems to be used only by MR. I grepped the Tez and
Spark sources and didn't find any instances. Even if we were to do this only
for MR, it would be incredibly valuable. I feel it would also set a
precedent/pattern that other frameworks can then leverage.

If we were to focus on MR, do you know what the common problematic conflicting
dependencies are? One alternative approach would be to start two JVMs for each
MR task: an MR-framework JVM and an MR-task JVM. We would do all
MR-framework-specific work in the MR-framework JVM, send raw MapReduce input
key-value pairs over a socket to the MR-task JVM, and read output key-value
pairs back over a socket. The MR-specific code running in the MR-task JVM
would then be minimal: it only needs to read from the socket and call the user
code. I know protobuf (required for serialization/deserialization) is often
the conflicting library, so this approach would be no help in that case. (We
could still shade this minimal set of libraries, although I personally dislike
shading a lot.)

I do wish Java 9 had something that would make this easier, but I didn't see
anything.
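
To make the isolation idea concrete, here is a minimal parent-last loading
sketch with {{ApplicationClassLoader}}; the classpath, parent, and
system-classes arguments (and the constructor shape) are assumptions for
illustration, not a recommended configuration:

{code}
import java.net.MalformedURLException;
import java.util.Arrays;
import org.apache.hadoop.util.ApplicationClassLoader;

public class IsolationSketch {
  public static void main(String[] args)
      throws MalformedURLException, ReflectiveOperationException {
    // Load user code from its own classpath, delegating to the parent loader
    // only for the listed "system" classes (JDK plus the Hadoop API surface).
    ClassLoader isolated = new ApplicationClassLoader(
        "/tmp/user-job/*",                                   // assumed user classpath
        IsolationSketch.class.getClassLoader(),              // parent (framework) loader
        Arrays.asList("java.", "javax.", "org.apache.hadoop."));

    // User classes resolve against the isolated loader first, so their copies of
    // third-party libraries win over the framework's copies.
    Class<?> userMapper = Class.forName("com.example.UserMapper", true, isolated);
    System.out.println(userMapper.getClassLoader());
  }
}
{code}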

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Status: Open  (was: Patch Available)

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304824#comment-15304824
 ] 

Steve Loughran commented on HADOOP-13171:
-

Chris, thanks for the comments; I'll look at them next. For now, I've pulled
some of the test utility code (including directory creation) up to the base FS
contract code.

# I'll look at the scale tests, and maybe cut them completely; for the base
tests I just hard-coded a minimal directory depth/width and shared it across
all the listing operations against complex directories.

# Regarding the temp file: we need something local; I'll put it under the
build directory.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304817#comment-15304817
 ] 

Elliott Clark commented on HADOOP-12974:


The failed tests are unrelated.

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Patch Available  (was: Open)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus(path)}} call, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Attachment: HADOOP-13207-branch-2-002.patch

Patch 002, with the tests added.

This adds the listStatus, listLocatedStatus, and listFiles tests.

It also tests the filtering operations, including the `protected
listLocatedStatus(Path, PathFilter)` method. We could point at that and say
"protected, not API, not needed", but FilterFS delegates to it; in my tests I
subclass it to expose the method (roughly as sketched below). So it could be
promoted to public, and given that FilterFS is in production code, we have to
conclude it's in use.

Indeed, it may actually be something to make public, though that's not a
concern here.
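
For reference, the kind of test-only subclass I mean looks roughly like this
(a sketch, not the actual patch code):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.*;

// Test-only: widens access to the protected filtered listing so tests can call it.
class ExposedFilterFileSystem extends FilterFileSystem {
  ExposedFilterFileSystem(FileSystem fs) {
    super(fs);
  }

  public RemoteIterator<LocatedFileStatus> listLocatedStatusPublic(
      Path path, PathFilter filter) throws IOException {
    // protected in FileSystem; FilterFileSystem delegates it to the wrapped FS
    return listLocatedStatus(path, filter);
  }
}
{code}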

Testing: local FS, HDFS, s3a, azure. The object stores are slow to set up and
tear down the complex directory trees used by the scanning tests, so I had to
merge all the checks there into one uber test case. Not ideal, since the first
failure will hide the others, but as the tests all currently pass, bearable.

Note that this appears to be the first time all the filtered list* calls are
covered by the FS contract tests. There are some calls in
{{TestFileInputFormat}} and elsewhere, but they test the splitting code, not
whether the FS implementations generate the right data for the splitters.
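
As a rough illustration of the checks being added (assuming {{fs}} is the
contract-test FileSystem and JUnit assertions are statically imported; the
paths are illustrative only):

{code}
Path base = new Path("/test/listing");
fs.mkdirs(new Path(base, "sub"));
fs.create(new Path(base, "sub/data.txt")).close();

// Filtered listStatus: only entries accepted by the PathFilter come back.
FileStatus[] filtered = fs.listStatus(base, new PathFilter() {
  @Override
  public boolean accept(Path p) {
    return p.getName().startsWith("sub");
  }
});
assertEquals(1, filtered.length);

// Recursive listFiles: files in subdirectories must be surfaced through the iterator.
RemoteIterator<LocatedFileStatus> it = fs.listFiles(base, true);
int files = 0;
while (it.hasNext()) {
  assertTrue(it.next().isFile());
  files++;
}
assertEquals(1, files);
{code}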

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus(path)}} call, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Open  (was: Patch Available)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus(path)}} call, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13193:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk.  For consistency, I also committed it to 
branch-2 and branch-2.8 even though Allen feels those branches are vomitacious. 
 ;-)

[~sekikn], thank you for your work on the Yetus 0.3.0 release and bringing it 
here.

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: 2.8.0
>
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304776#comment-15304776
 ] 

Hadoop QA commented on HADOOP-12974:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 8s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806708/HADOOP-12974v5.patch |
| JIRA Issue | HADOOP-12974 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de2c9eb1cd39 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5ea6fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9607/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9607/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9607/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9607/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
>

[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13197:

Attachment: HADOOP-13197.02.patch

Thanks [~jnp] for the review. Attaching a patch that removes the prefix as
suggested. The public API {{getTotalCallVolume()}} is kept as-is to maintain
backward compatibility.
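
To illustrate the distinction between the two counters (a minimal sketch, not
the actual DecayRpcScheduler code; all names here are made up):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a decayed count used for scheduling decisions plus a raw,
// never-decayed count exposed for monitoring.
class CallCounter {
  private final AtomicLong decayedCount = new AtomicLong();
  private final AtomicLong rawCount = new AtomicLong();

  void onCall() {
    decayedCount.incrementAndGet();
    rawCount.incrementAndGet();
  }

  // The periodic sweep halves only the decayed count; the raw count grows monotonically.
  void decay() {
    long current;
    do {
      current = decayedCount.get();
    } while (!decayedCount.compareAndSet(current, current / 2));
  }

  long getDecayedCallVolume() { return decayedCount.get(); }
  long getRawCallVolume()     { return rawCount.get(); }
}
{code}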

> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It will 
> be useful to expose the non-decayed raw count for monitoring applications. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Attachment: HADOOP-12974v5.patch

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304634#comment-15304634
 ] 

Hadoop QA commented on HADOOP-12974:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 34s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806686/HADOOP-12974v4.patch |
| JIRA Issue | HADOOP-12974 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 04be6cee3cf8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5ea6fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9606/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9606/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9606/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: 

[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304599#comment-15304599
 ] 

Jitendra Nath Pandey commented on HADOOP-13197:
---

One more minor comment. I think we can get rid of the TopN prefix from the
metric name, because decayedCallerVolume lets us figure out which the top
users are anyway. The metrics would be easier to consume without that prefix.

> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It will 
> be useful to expose the non-decayed raw count for monitoring applications. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Attachment: HADOOP-12974v4.patch

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-05-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304540#comment-15304540
 ] 

Elliott Clark commented on HADOOP-12974:


bq. I don't think you need to typecast the instance object.
Only the caching instances have a stop method.
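
Roughly what that looks like at the call site (the builder and method names
below are assumptions for illustration, not necessarily the final API):

{code}
Configuration conf = new Configuration();
File volumeDir = new File("/data/1/dfs/dn");   // illustrative path

// Only the caching implementations run a background refresh thread, so only
// they expose a shutdown; the plain GetSpaceUsed interface does not.
GetSpaceUsed spaceUsed = new GetSpaceUsed.Builder()
    .setPath(volumeDir)        // assumed builder methods
    .setConf(conf)
    .build();

long bytesUsed = spaceUsed.getUsed();

if (spaceUsed instanceof CachingGetSpaceUsed) {
  ((CachingGetSpaceUsed) spaceUsed).close();   // stops the refresh thread
}
{code}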

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-12291:
--
Assignee: Esther Kundin

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.
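
A hypothetical sketch of what nested-group expansion could look like (the base
DN, "member" attribute, and search filter are assumptions for illustration,
not the patch's actual configuration):

{code}
import java.util.LinkedHashSet;
import java.util.Set;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Starting from the user's direct groups, walk "member" links upward a bounded
// number of levels, so that jdoe in group A (itself a member of B) yields {A, B}.
class NestedGroupSketch {
  static Set<String> expand(DirContext ctx, Set<String> directGroupDns, int maxLevels)
      throws NamingException {
    Set<String> all = new LinkedHashSet<>(directGroupDns);
    Set<String> frontier = new LinkedHashSet<>(directGroupDns);
    for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
      Set<String> next = new LinkedHashSet<>();
      for (String groupDn : frontier) {
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> parents =
            ctx.search("dc=example,dc=com", "(member={0})", new Object[]{groupDn}, sc);
        while (parents.hasMore()) {
          String parentDn = parents.next().getNameInNamespace();
          if (all.add(parentDn)) {
            next.add(parentDn);   // newly discovered parent group, expand it next round
          }
        }
      }
      frontier = next;
    }
    return all;
  }
}
{code}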



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-12291:
--
Assignee: (was: Anu Engineer)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304320#comment-15304320
 ] 

Esther Kundin commented on HADOOP-12291:


Interesting, but it's not letting me reassign it to myself either.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Anu Engineer
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304305#comment-15304305
 ] 

Anu Engineer commented on HADOOP-12291:
---

[~ekundin] JIRA was not letting me move the patch back to Open and then Patch
Available without being able to own the JIRA, so I have picked it up;
hopefully Jenkins will pick up the patch. Can you please assign this JIRA back
to yourself? I am having difficulty doing that.



> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Anu Engineer
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-12291:
--
Status: Open  (was: Patch Available)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HADOOP-12291:
-

Assignee: Anu Engineer  (was: Esther Kundin)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Anu Engineer
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-12291:
--
Status: Patch Available  (was: Open)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Anu Engineer
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not
> supported. So, for example, if user {{jdoe}} is part of group A, which is a
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and
> SSSD (or similar tools), but it would be good to have this feature as part of
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13202) Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might be changed

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304280#comment-15304280
 ] 

Hadoop QA commented on HADOOP-13202:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 14s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806648/HADOOP-13202.02.patch 
|
| JIRA Issue | HADOOP-13202 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 798bde9a32e3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e4022de |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9605/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9605/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9605/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9605/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might 
> be changed
> 
>
> Key: HADOOP-13202
>

[jira] [Commented] (HADOOP-13175) Remove hadoop-ant from hadoop-tools

2016-05-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304218#comment-15304218
 ] 

Jason Lowe commented on HADOOP-13175:
-

Sorry for the delay.  I haven't received final confirmation, but I believe this
isn't going to be used internally by the time we move to 3.x.  Even if it is,
it wouldn't be a huge burden to maintain it ourselves, so I'm OK with this
going into trunk if desired.

> Remove hadoop-ant from hadoop-tools
> ---
>
> Key: HADOOP-13175
> URL: https://issues.apache.org/jira/browse/HADOOP-13175
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
> Attachments: HADOOP-13175.001.patch
>
>
> The hadoop-ant code is an ancient kludge that is unlikely to still have any users. 
> We can delete it from trunk as a "scream test" for 3.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: HADOOP-12847.010.branch-2.8.patch

Attaching the branch-2.8 patch.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.8.patch, HADOOP-12847.010.branch-2.patch, 
> HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it supports neither HTTPS nor a Kerberized Hadoop cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation
> with a Kerberized NameNode web UI. It will also fall back to simple
> authentication if the cluster is not Kerberized.
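
For illustration, SPNEGO-authenticated access to the log-level servlet via
{{AuthenticatedURL}} might look like the sketch below; the host, port, and
query parameters are assumptions, not values from the patch:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonLogSketch {
  public static void main(String[] args) throws Exception {
    // Negotiates SPNEGO with the caller's Kerberos credentials, then issues the request.
    URL url = new URL(
        "https://nn.example.com:9871/logLevel?log=org.apache.hadoop&level=DEBUG");
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}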



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13202) Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might be changed

2016-05-27 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13202:

Attachment: HADOOP-13202.02.patch

> Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might 
> be changed
> 
>
> Key: HADOOP-13202
> URL: https://issues.apache.org/jira/browse/HADOOP-13202
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: zhengbing li
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13202.01.patch, HADOOP-13202.02.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Current implementation:
> {{return (vectorSize + 7) / 8;}}
> When vectorSize is 2147483647 (the maximum value of int), a
> java.lang.NegativeArraySizeException is raised.
> The implementation could be changed to:
> {{return (int) (((long) vectorSize + 7) / 8);}}
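
A quick demonstration of the overflow and the proposed fix:

{code}
public class GetNBytesOverflow {
  public static void main(String[] args) {
    int vectorSize = Integer.MAX_VALUE;                  // 2147483647
    int broken = (vectorSize + 7) / 8;                   // int addition wraps: -268435455
    int fixed  = (int) (((long) vectorSize + 7) / 8);    // widen first: 268435456
    System.out.println("broken=" + broken + " fixed=" + fixed);
  }
}
{code}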



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13193:
---
Hadoop Flags: Reviewed

+1, based on my prior testing and +1 of the Yetus 0.3.0 release candidate.  For 
those who might be watching and wondering what the new {{qbt}} script is all 
about, that's a new feature of Yetus 0.3.0 called Quality Build Tool.  See 
YETUS-156 for details.

I can commit this later today.

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304159#comment-15304159
 ] 

stack commented on HADOOP-12910:


Copy/paste of Deferred would work. It supports callbacks, has a respectable 
ancestry (a copy of the Twisted Python pattern), a long, proven track record 
across a few projects, is well documented, and has a narrower API than 
CompletableFuture, so there is less to implement. The (minor) downside is that 
it is not like CompletableFuture.

bq. but if Future is sufficient for the current set of usecases, then let's 
go with this plan.

A Future alone is not enough for HBase (and Kudu?); we need callbacks. We don't 
want to have to consume the async API in one way when going against H2 and in 
another manner when on top of H3.
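To make the callback-versus-Future point concrete, here is a hypothetical 
sketch (the interface names and shapes are illustrative only, not anything 
proposed in the attached patches): a plain Future forces the caller to block or 
poll, a callback style lets an HBase-like consumer react on completion, and 
CompletableFuture can serve both styles.

{code}
import java.util.concurrent.CompletableFuture;

// Hypothetical async-rename shapes, for discussion only.
interface FutureStyleFs {
  // Caller must eventually block on or poll the Future.
  java.util.concurrent.Future<Boolean> rename(String src, String dst);
}

interface CallbackStyleFs {
  // Caller registers what to do on completion; no blocking needed.
  void rename(String src, String dst,
      java.util.function.BiConsumer<Boolean, Throwable> onDone);
}

class AsyncRenameDemo {
  // CompletableFuture is a Future and also supports callback chaining.
  static CompletableFuture<Boolean> renameAsync(String src, String dst) {
    return CompletableFuture.supplyAsync(() -> {
      // pretend the rename work happens here
      return true;
    });
  }

  public static void main(String[] args) throws Exception {
    renameAsync("/a", "/b")
        .whenComplete((ok, err) -> System.out.println("rename done: " + ok))
        .get();  // block only in this demo so the JVM waits for the callback
  }
}
{code}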

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304118#comment-15304118
 ] 

John Zhuge commented on HADOOP-13079:
-

Could someone kindly code review the patch please?

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
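For illustration, a simplified, ASCII-only approximation of the "replace 
non-printable characters with ?" behaviour could look like the following; the 
actual patch defers to isprint(3) and the current locale rather than this 
hard-coded range:

{code}
public class MaskNonPrintable {
  // Replace characters outside the printable ASCII range with '?'.
  static String mask(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      sb.append((c >= 0x20 && c < 0x7f) ? c : '?');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(mask("file\u0007name\ttab"));  // prints file?name?tab
  }
}
{code}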



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-05-27 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304088#comment-15304088
 ] 

Lars Francke commented on HADOOP-13209:
---

Out of curiosity: Why?
Political correctness?

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
> Attachments: HADOOP-13209.v01.patch
>
>
> slaves.sh and the slaves file should get replace with workers.sh and a 
> workers file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12215) Support LogLevel CLI in secure mode

2016-05-27 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt resolved HADOOP-12215.
---
Resolution: Duplicate

> Support LogLevel CLI in secure mode
> ---
>
> Key: HADOOP-12215
> URL: https://issues.apache.org/jira/browse/HADOOP-12215
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Currently the log level CLI is not supported in secure mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12215) Support LogLevel CLI in secure mode

2016-05-27 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reopened HADOOP-12215:
---

Reopening to close as duplicate

> Support LogLevel CLI in secure mode
> ---
>
> Key: HADOOP-12215
> URL: https://issues.apache.org/jira/browse/HADOOP-12215
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Currently the log level CLI is not supported in secure mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12215) Support LogLevel CLI in secure mode

2016-05-27 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt resolved HADOOP-12215.
---
Resolution: Fixed

> Support LogLevel CLI in secure mode
> ---
>
> Key: HADOOP-12215
> URL: https://issues.apache.org/jira/browse/HADOOP-12215
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Currently the log level CLI is not supported in secure mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303890#comment-15303890
 ] 

Hadoop QA commented on HADOOP-12756:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 8s 
{color} | {color:red} root in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-aliyun in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 2s 
{color} | {color:red} hadoop-tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 6m 21s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 26s {color} 
| {color:red} root generated 1 new + 697 unchanged - 0 fixed = 698 total (was 
697) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s 
{color} | {color:red} root: The patch generated 115 new + 0 unchanged - 0 fixed 
= 115 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 0s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 156m 21s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 217m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapreduce.tools.TestCLI |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 

[jira] [Created] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-05-27 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13212:
-

 Summary: Provide an option to set the socket buffers in 
S3AFileSystem
 Key: HADOOP-13212
 URL: https://issues.apache.org/jira/browse/HADOOP-13212
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


It should be possible to provide hints about send/receive socket buffer sizes 
to AmazonS3Client via ClientConfiguration. It would be good to expose these 
parameters in S3AFileSystem for performance tuning.
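A rough sketch of the idea (the Hadoop property names below are hypothetical 
placeholders; only the AWS SDK's {{ClientConfiguration}} buffer-size hints are 
assumed):

{code}
import com.amazonaws.ClientConfiguration;
import org.apache.hadoop.conf.Configuration;

public class S3ASocketBufferHints {
  // Hypothetical S3A keys, for illustration only.
  static final String SEND_BUFFER = "fs.s3a.socket.send.buffer";
  static final String RECV_BUFFER = "fs.s3a.socket.recv.buffer";

  static ClientConfiguration buildAwsConf(Configuration hadoopConf) {
    ClientConfiguration awsConf = new ClientConfiguration();
    // Pass the configured sizes through as TCP buffer hints for the
    // connections the S3 client opens.
    awsConf.setSocketBufferSizeHints(
        hadoopConf.getInt(SEND_BUFFER, 8 * 1024),
        hadoopConf.getInt(RECV_BUFFER, 8 * 1024));
    return awsConf;
  }
}
{code}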



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303819#comment-15303819
 ] 

Hadoop QA commented on HADOOP-13162:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
49s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
20s {color} | {color:green} root: The patch generated 0 new + 23 unchanged - 2 
fixed = 23 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 14s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 22s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303801#comment-15303801
 ] 

Hadoop QA commented on HADOOP-13207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 27s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806479/HADOOP-13207-branch-2-001.patch
 |
| JIRA Issue | HADOOP-13207 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 07ca7d27a5b1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 38ae595 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9604/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9604/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There is a lot of 
> implicit use of the {{listStatus()}} path, but no coverage or tests for the 
> others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Patch Available  (was: In Progress)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There is a lot of 
> implicit use of the {{listStatus()}} path, but no coverage or tests for the 
> others.
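For readers less familiar with the calls being specified, a small usage sketch 
(no semantics implied beyond the existing javadoc):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListingDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp");

    // listStatus: one directory level, returned as a fully materialized array.
    for (FileStatus st : fs.listStatus(dir)) {
      System.out.println("child: " + st.getPath());
    }

    // listFiles(recursive=true): files only, streamed through a
    // RemoteIterator, with block locations attached.
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(dir, true);
    while (it.hasNext()) {
      System.out.println("file: " + it.next().getPath());
    }
  }
}
{code}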



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13162:

Parent Issue: HADOOP-11694  (was: HADOOP-1169)

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is a relatively expensive call, and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303735#comment-15303735
 ] 

Hadoop QA commented on HADOOP-13193:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
14s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 45s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806590/HADOOP-13193.1.patch |
| JIRA Issue | HADOOP-13193 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 3466bbe00505 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bde819a |
| shellcheck | v0.4.4 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9603/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-27 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13162:
--
Attachment: HADOOP-13162-branch-2-004.patch

AWS endpoint:  ap-southeast-1

AWS results:
{noformat}
Results :

Tests in error:
  
TestS3AContractDistCp>AbstractContractDistCpTest.largeFilesToRemote:96->AbstractContractDistCpTest.largeFiles:176
 »
  TestS3ADeleteFilesOneByOne>TestS3ADeleteManyFiles.testBulkRenameAndDelete:103 
»
  TestS3ADeleteManyFiles.testBulkRenameAndDelete:103 »  test timed out after 
180...

Tests run: 228, Failures: 0, Errors: 3, Skipped: 7
{noformat}

rename d1/d2 d1/d4 throws a FileAlreadyExistsException (with or without the 
patch). However, the same operation succeeds with the hadoop command-line 
tools. Not sure whether it has anything to do with inconsistency, but I have 
removed that "rename d1/d2 d1/d4" step, since the exception shows up earlier as 
well.

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is a relatively expensive call, and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.
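To illustrate the kind of reduction being discussed (a sketch only, not the 
logic in the attached patches): mkdirs can stop probing ancestors as soon as 
one of them is known to exist, instead of issuing getFileStatus for every level 
unconditionally.

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsProbeSketch {
  // Walk from the leaf upwards and return the first existing ancestor;
  // levels above it need no further getFileStatus calls.
  static Path firstExistingAncestor(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        FileStatus st = fs.getFileStatus(p);
        if (!st.isDirectory()) {
          throw new IOException("Not a directory: " + p);
        }
        return p;  // p exists, so its own ancestors must exist as well
      } catch (FileNotFoundException e) {
        // keep walking up
      }
    }
    return null;  // nothing exists up to the root (unexpected)
  }
}
{code}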



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-13193:

Attachment: HADOOP-13193.1.patch

-01

* bump the version of Yetus to 0.3.0
* add qbt (quality build tool) command


> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-13193:

Status: Patch Available  (was: Open)

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-13193:
---

Assignee: Kengo Seki  (was: Allen Wittenauer)

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>
> Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303713#comment-15303713
 ] 

Jitendra Nath Pandey edited comment on HADOOP-13197 at 5/27/16 7:39 AM:


{code} public long getTotalCallVolume() {code}
I think we should rename the method as well to getTotalDecayedCallVolume.

Otherwise, looks good to me. +1



was (Author: jnp):
bq. {{ public long getTotalCallVolume() }}
I think we should rename the method as well to getTotalDecayedCallVolume.

Otherwise, looks good to me. +1


> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It 
> would be useful to also expose the non-decayed raw count for monitoring 
> applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2016-05-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303713#comment-15303713
 ] 

Jitendra Nath Pandey commented on HADOOP-13197:
---

bq. {{ public long getTotalCallVolume() }}
I think we should rename the method as well to getTotalDecayedCallVolume.

Otherwise, looks good to me. +1


> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It 
> would be useful to also expose the non-decayed raw count for monitoring 
> applications.
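A minimal sketch of the idea (field and method names here are illustrative, not 
the attached patch): keep a raw counter next to the decayed one, and only decay 
the decayed value.

{code}
import java.util.concurrent.atomic.AtomicLong;

public class CallVolumeSketch {
  private final AtomicLong decayedCallVolume = new AtomicLong();
  private final AtomicLong rawCallVolume = new AtomicLong();

  void onCall() {
    decayedCallVolume.incrementAndGet();
    rawCallVolume.incrementAndGet();   // never decayed
  }

  void decay(double factor) {
    // The periodic decay sweep touches only the decayed counter.
    decayedCallVolume.set((long) (decayedCallVolume.get() * factor));
  }

  long getTotalDecayedCallVolume() { return decayedCallVolume.get(); }
  long getTotalRawCallVolume()     { return rawCallVolume.get(); }
}
{code}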



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303697#comment-15303697
 ] 

Yi Liu commented on HADOOP-12756:
-

{quote}
 I'd recommend a wider conversation on the dev mailing lists before filing any 
specific requests to infra.
{quote}
+1 for this.

Another thing about "auth-keys.xml": currently we use the credential file 
instead of a normal Hadoop configuration property. I think the reason is that 
it's more secure and the user can control the Linux file permissions of 
"auth-keys.xml". Could we also allow normal Hadoop configuration properties for 
the credentials? Then the credentials could be specified on the mvn build 
command line, which would be easier for INFRA to support, while users could 
still use "auth-keys.xml" in practice.



> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303667#comment-15303667
 ] 

Yi Liu commented on HADOOP-12756:
-

Agree with [~cnauroth].  The credentials need to go somewhere accessible by 
each Jenkins host that runs a Hadoop pre-commit build.   

{code}
have a dedicated host (or vm) equipped with all these credentials and run all 
the tests daily
{code}
Kai, I don't think the answer is a dedicated host; instead, we need to make the 
auth-keys.xml available on all the Jenkins hosts that run the Hadoop pre-commit 
build. Not sure whether it's easy for INFRA to support this.

{code}
It seems these two files should not be included in source code, as what 
.gitingore has excluded. Maybe we can provide these two files separately?
{code}
[~lingzhou], please don't add the credentials to the patch; they are not expected there.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303658#comment-15303658
 ] 

Hudson commented on HADOOP-13199:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9875 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9875/])
HADOOP-13199. Add doc for distcp -filters. (John Zhuge via Yongjun (yzhang: rev 
cfb860dee72a27382a26bf450bb8b16784aeebbb)
* hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303654#comment-15303654
 ] 

John Zhuge commented on HADOOP-13199:
-

Thanks [~yzhangal].

> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303649#comment-15303649
 ] 

Chris Nauroth commented on HADOOP-12756:


bq. I thought the new module could also follow the existing pattern for the short term.

Yes, I agree.  I don't think a larger infra solution needs to be tied directly 
to this patch.

bq. A simple way to achieve this would be to have a dedicated host (or VM) equipped with all these credentials and run all the tests daily.

This would be nice, but I think pre-commit would be the big win for the 
community.  That would save a lot of time for those of us currently doing long 
test runs on our dev machines verifying patches on those modules.  I'd 
recommend a wider conversation on the dev mailing lists before filing any 
specific requests to infra.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12756:
---
Status: Patch Available  (was: Open)

Submitted the patch to trigger the build.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-05-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303636#comment-15303636
 ] 

Kai Zheng commented on HADOOP-12756:


Thanks [~cnauroth] for documenting the current situation for the existing 
similar modules. It's very helpful!
I thought the new module could also follow the existing pattern for the short term.

bq. I expect it will take coordination with the Apache infra team to get this 
done correctly.
A simple way to achieve this would be to have a dedicated host (or VM) equipped 
with all these credentials and run all the tests daily. If this sounds good, I 
can file an INFRA JIRA asking for that support.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13199:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8.

Thanks John for the contribution.


> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303627#comment-15303627
 ] 

Hudson commented on HADOOP-12911:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9874 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9874/])
HADOOP-12911. Upgrade Hadoop MiniKDC with Kerby. Contributed by Jiajia 
(kai.zheng: rev 916140604ffef59466ba30832478311d3e6249bd)
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java
* hadoop-common-project/hadoop-auth/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/krb5.conf
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc.ldiff
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* hadoop-common-project/hadoop-common/src/test/resources/krb5.conf
* 
hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
* hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc-krb5.conf
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/krb5.conf
* hadoop-common-project/hadoop-minikdc/pom.xml


> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start by upgrading Hadoop MiniKDC with Kerby’s 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace that 
> implementation using Kerby. MiniKDC can use Kerby’s SimpleKDC directly to 
> avoid depending on the full Directory project. Kerby also provides nice 
> identity backends, such as a lightweight memory-based one and a very simple 
> JSON one, for easy development and test environments.
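Since the intent appears to be that MiniKDC's public API stays the same while 
the backend moves to Kerby's SimpleKDC, existing test code along these lines 
should keep working unchanged (a usage sketch, not code from the patch):

{code}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcUsage {
  public static void main(String[] args) throws Exception {
    Properties conf = MiniKdc.createConf();
    File workDir = new File("target/minikdc-work");
    workDir.mkdirs();

    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();
    try {
      // Create a principal and export its keytab, as tests do today.
      File keytab = new File(workDir, "test.keytab");
      kdc.createPrincipal(keytab, "client/localhost");
      System.out.println("KDC realm: " + kdc.getRealm());
    } finally {
      kdc.stop();
    }
  }
}
{code}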



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-27 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303626#comment-15303626
 ] 

Jiajia Li commented on HADOOP-12911:


Thanks to all the reviewers for the great comments and suggestions; they really 
helped me a lot.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start by upgrading Hadoop MiniKDC with Kerby’s 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace that 
> implementation using Kerby. MiniKDC can use Kerby’s SimpleKDC directly to 
> avoid depending on the full Directory project. Kerby also provides nice 
> identity backends, such as a lightweight memory-based one and a very simple 
> JSON one, for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303622#comment-15303622
 ] 

Hadoop QA commented on HADOOP-13199:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 34s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806575/HADOOP-13199.003.patch
 |
| JIRA Issue | HADOOP-13199 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a00af11de0ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34cc21f |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9600/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303620#comment-15303620
 ] 

Hadoop QA commented on HADOOP-13209:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
28s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 51s 
{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 27s 
{color} | {color:red} root: The patch generated 28 new + 186 unchanged - 27 
fixed = 214 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
15s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 56s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 59s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 12s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 11s {color} 
| {color:red} hadoop-mapreduce-client-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 145m 30s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
45s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-12847:
---
Labels: supportability  (was: )

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.patch, HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it supports neither HTTPS nor a Kerberized Hadoop cluster.
> Using {{AuthenticatedURL}}, it will be able to perform SPNEGO negotiation 
> with a Kerberized NameNode web UI, and it will fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12911:
---
  Resolution: Fixed
Target Version/s: 3.0.0-alpha1  (was: )
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~jiajia] for this great contribution, and 
[~ste...@apache.org] and [~andrew.wang] for reviewing!

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. A good first step is to upgrade Hadoop MiniKDC to build on Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrows ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is 
> no longer maintained. The Directory community plans to replace it with 
> Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid depending on the 
> full Directory project. Kerby also provides convenient identity backends, 
> such as a lightweight memory-based one and a very simple JSON one, for easy 
> development and test environments.
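
For context, here is a minimal sketch of how a test typically drives MiniKDC 
today; the assumption (and the point of this change) is that the existing 
{{org.apache.hadoop.minikdc.MiniKdc}} API keeps working unchanged on top of 
Kerby SimpleKDC. The work directory and principal names below are illustrative 
only.

{code:java}
import java.io.File;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcSketch {
  public static void main(String[] args) throws Exception {
    Properties conf = MiniKdc.createConf();          // default KDC settings
    File workDir = new File("target/minikdc-work");  // illustrative scratch dir
    workDir.mkdirs();

    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();
    try {
      // Export a keytab for a couple of illustrative principals.
      File keytab = new File(workDir, "test.keytab");
      kdc.createPrincipal(keytab, "client/localhost", "HTTP/localhost");
      System.out.println("MiniKDC realm: " + kdc.getRealm());
    } finally {
      kdc.stop();
    }
  }
}
{code}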



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-12847:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~jojochuang] for the contribution, and other folks for reviewing.


> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.9.0
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.patch, HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it supports neither HTTPS nor a Kerberized Hadoop cluster.
> Using {{AuthenticatedURL}}, it will be able to perform SPNEGO negotiation 
> with a Kerberized NameNode web UI, and it will fall back to simple 
> authentication if the cluster is not Kerberized.
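
For reference, a minimal client-side sketch (not the patch itself) of the kind 
of request {{AuthenticatedURL}} enables; the HTTPS endpoint, port, and query 
parameters below are illustrative.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonLogSpnegoSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative HTTPS URL for a NameNode's /logLevel servlet.
    URL url = new URL(
        "https://nn.example.com:9871/logLevel?log=org.apache.hadoop.hdfs&level=DEBUG");

    // The default authenticator negotiates SPNEGO when the server answers with
    // "Negotiate", and falls back to simple/pseudo authentication otherwise.
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);

    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}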



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303612#comment-15303612
 ] 

Hadoop QA commented on HADOOP-13199:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806575/HADOOP-13199.003.patch
 |
| JIRA Issue | HADOOP-13199 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 204ba3b0c39b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34cc21f |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9599/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303605#comment-15303605
 ] 

Yongjun Zhang commented on HADOOP-13199:


Thanks [~jzhuge]. +1, will commit soon.


> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303603#comment-15303603
 ] 

Hudson commented on HADOOP-12847:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9873 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9873/])
HADOOP-12847. hadoop daemonlog should support https and SPNEGO for Kerberized 
cluster (yzhang: rev 34cc21f6d1a293d92613defba38e8ae810db4c71)
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java


> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.patch, HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it supports neither HTTPS nor a Kerberized Hadoop cluster.
> Using {{AuthenticatedURL}}, it will be able to perform SPNEGO negotiation 
> with a Kerberized NameNode web UI, and it will fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters

2016-05-27 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13199:

Attachment: HADOOP-13199.003.patch

Patch 003:
* Minor change in wording from patch 002

> Add doc for distcp -filters
> ---
>
> Key: HADOOP-13199
> URL: https://issues.apache.org/jira/browse/HADOOP-13199
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13199.001.patch, HADOOP-13199.002.patch, 
> HADOOP-13199.003.patch
>
>
> Update distcp doc to reflect -filters option added by HADOOP-1540.
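
For reference, a rough usage sketch of what the new documentation describes 
(file names and paths below are illustrative): the {{-filters}} argument 
points at a text file containing one regular expression per line, and any 
source path matching one of those patterns is excluded from the copy.

{code}
# filters.txt -- one Java regular expression per line (illustrative patterns)
#   .*\.tmp$
#   .*/_temporary/.*

hadoop distcp -filters /path/to/filters.txt \
    hdfs://nn1:8020/source/dir hdfs://nn2:8020/target/dir
{code}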



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303602#comment-15303602
 ] 

Kai Zheng commented on HADOOP-12911:


Thanks Jiajia. Yes, I see we have a comment about this, and we can get rid of 
the workaround once the next Kerby revision is available.

+1 on the latest patch; I will commit it shortly.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. A good first step is to upgrade Hadoop MiniKDC to build on Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrows ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is 
> no longer maintained. The Directory community plans to replace it with 
> Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid depending on the 
> full Directory project. Kerby also provides convenient identity backends, 
> such as a lightweight memory-based one and a very simple JSON one, for easy 
> development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org