[jira] [Assigned] (HADOOP-13640) Fix findbugs warning in VersionInfoMojo.java

2016-09-22 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-13640:
---

Assignee: Yuanbo Liu

> Fix findbugs warning in VersionInfoMojo.java
> 
>
> Key: HADOOP-13640
> URL: https://issues.apache.org/jira/browse/HADOOP-13640
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Yuanbo Liu
>
> Reported by Arpit on HADOOP-13602
> {quote}
> [INFO] 
> org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo.getSvnUriInfo(String)
>  uses String.indexOf(String) instead of String.indexOf(int) 
> ["org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo"] At 
> VersionInfoMojo.java:[lines 49-341]
> {quote}
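
For context, the warning concerns searching for a single character with the String overload. A minimal, hypothetical Java sketch of the kind of change findbugs is asking for (illustrative only, not the actual patch; the example string is made up):
{code}
public class IndexOfExample {
  public static void main(String[] args) {
    String uri = "scm:svn:https://svn.apache.org/repos/asf/hadoop/common/trunk";
    // Findbugs flags String.indexOf(String) when the argument is one character long;
    // the char overload expresses the same search without the String-scan machinery.
    int firstColon = uri.indexOf(':');   // preferred over uri.indexOf(":")
    System.out.println("first ':' at index " + firstColon);
  }
}
{code}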






[jira] [Updated] (HADOOP-13641) Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation

2016-09-22 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HADOOP-13641:
--
Attachment: HADOOP-13641.1.patch

I attached my first patch for this; please help review it.

The patch just extracts the if statement and returns at the beginning. There is 
no logical change in the thread implementation.

> Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation
> ---
>
> Key: HADOOP-13641
> URL: https://issues.apache.org/jira/browse/HADOOP-13641
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Huafeng Wang
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13641.1.patch
>
>
> From [~drankye]'s comment in HADOOP-13590:
> Could we return earlier at the beginning so we can avoid at least 2 level of 
> indents and make the whole block more readable?
> {code}
>   /**Spawn a thread to do periodic renewals of kerberos credentials*/
>   private void spawnAutoRenewalThreadForUserCreds() {
> if (isSecurityEnabled()) {
>   //spawn thread only if we have kerb credentials
>   if (user.getAuthenticationMethod() == AuthenticationMethod.KERBEROS &&
>   !isKeytab) {
> ...
> ...
>  very deep nested ...
> ...
> {code}
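
As a rough illustration of the suggested early-return shape (a sketch only, reusing the fields and helpers from the snippet above; the actual patch may differ):
{code}
  /** Spawn a thread to do periodic renewals of kerberos credentials. */
  private void spawnAutoRenewalThreadForUserCreds() {
    // Guard clauses replace the nested ifs: bail out early when there is
    // nothing to renew, so the renewal logic sits at a single indent level.
    if (!isSecurityEnabled()
        || user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
        || isKeytab) {
      return;
    }
    // ... spawn the renewal thread here ...
  }
{code}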






[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515170#comment-15515170
 ] 

Hadoop QA commented on HADOOP-13632:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
7s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829971/HADOOP-13632.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux a058aa9c28cd 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d0372dc |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10578/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10578/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13632.001.patch
>
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that the 
> process couldn't be started.






[jira] [Comment Edited] (HADOOP-13317) Add logs to KMS servier-side to improve supportability

2016-09-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515123#comment-15515123
 ] 

Suraj Acharya edited comment on HADOOP-13317 at 9/23/16 2:15 AM:
-

* The KMS does not support any cipher other than AES/CTR in the current 
implementation. One can change the cipher in core-site.xml, but that will throw 
an error since AES/CTR has been hardcoded. 
* I haven't put some information in the logs because it is either sensitive 
material or access-controlled. Printing the material of a key is an information 
leak, and so is printing the metadata and other information while it is being 
returned. I have mostly logged the incoming request information for the same 
reason.
* Also, I didn't wish to log information where ACLs protect the transaction.
* I now get what you are saying about the exceptions. I think we should make 
that a separate effort for the KMS, because we will need to know which 
exceptions we wish to handle.


was (Author: sacharya):
* The KMS does not support any other cipher other than AES/CTR in the current 
implementation. One can change the cipher in core-site.xml but that will throw 
an error since AES/CTR has been hardcoded. 
* I havent put some information in the logs because of either sensitive matter 
or access control. Putting material of a key is an information leak. Also, it 
is an information leak to print out the metadata and other information while 
being returned. I have logged mostly the incoming request information and the 
reason is the same.
* Also, I didnt wish to put information where ACLs protect transaction.
* I know get what you are saying about the exceptions. I think we should make 
that as a separate effort for the KMS. The reason being we will need to know 
the exceptions we wish to handle.

> Add logs to KMS servier-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There're currently no logs 
> at all, making trouble shooting difficult.






[jira] [Commented] (HADOOP-13317) Add logs to KMS servier-side to improve supportability

2016-09-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515124#comment-15515124
 ] 

Suraj Acharya commented on HADOOP-13317:


* The KMS does not support any other cipher other than AES/CTR in the current 
implementation. One can change the cipher in core-site.xml but that will throw 
an error since AES/CTR has been hardcoded. 
* I havent put some information in the logs because of either sensitive matter 
or access control. Putting material of a key is an information leak. Also, it 
is an information leak to print out the metadata and other information while 
being returned. I have logged mostly the incoming request information and the 
reason is the same.
* Also, I didnt wish to put information where ACLs protect transaction.
* I know get what you are saying about the exceptions. I think we should make 
that as a separate effort for the KMS. The reason being we will need to know 
the exceptions we wish to handle.

> Add logs to KMS servier-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There're currently no logs 
> at all, making trouble shooting difficult.






[jira] [Issue Comment Deleted] (HADOOP-13317) Add logs to KMS servier-side to improve supportability

2016-09-22 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13317:
---
Comment: was deleted

(was: * The KMS does not support any other cipher other than AES/CTR in the 
current implementation. One can change the cipher in core-site.xml but that 
will throw an error since AES/CTR has been hardcoded. 
* I havent put some information in the logs because of either sensitive matter 
or access control. Putting material of a key is an information leak. Also, it 
is an information leak to print out the metadata and other information while 
being returned. I have logged mostly the incoming request information and the 
reason is the same.
* Also, I didnt wish to put information where ACLs protect transaction.
* I know get what you are saying about the exceptions. I think we should make 
that as a separate effort for the KMS. The reason being we will need to know 
the exceptions we wish to handle.)

> Add logs to KMS servier-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There're currently no logs 
> at all, making trouble shooting difficult.






[jira] [Commented] (HADOOP-13317) Add logs to KMS servier-side to improve supportability

2016-09-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515123#comment-15515123
 ] 

Suraj Acharya commented on HADOOP-13317:


* The KMS does not support any other cipher other than AES/CTR in the current 
implementation. One can change the cipher in core-site.xml but that will throw 
an error since AES/CTR has been hardcoded. 
* I havent put some information in the logs because of either sensitive matter 
or access control. Putting material of a key is an information leak. Also, it 
is an information leak to print out the metadata and other information while 
being returned. I have logged mostly the incoming request information and the 
reason is the same.
* Also, I didnt wish to put information where ACLs protect transaction.
* I know get what you are saying about the exceptions. I think we should make 
that as a separate effort for the KMS. The reason being we will need to know 
the exceptions we wish to handle.

> Add logs to KMS servier-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There're currently no logs 
> at all, making trouble shooting difficult.






[jira] [Updated] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13632:
-
Attachment: HADOOP-13632.001.patch

Here's a patch which moves us over to {{hadoop_status_daemon}}. Tested manually 
with an empty config that causes the NN to abort quickly. I left out the error 
message, but I can add it back if you think it doesn't hurt.

The timing window is quite narrow though. If I instead use a valid config but 
an unformatted namedir, so the NN dies later during initialization, the check 
doesn't trigger.

Since this is a pretty common error, we could try to catch it by extending the 
timer loop. I remember talking to a Cloudera Manager engineer who maintains a 
similar startup script, and CM waits longer than 5s (I think 30s?) to confirm 
that the process is still alive.

Thoughts?

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
> Attachments: HADOOP-13632.001.patch
>
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that the 
> process couldn't be started.






[jira] [Updated] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13632:
-
Assignee: Andrew Wang
  Status: Patch Available  (was: Open)

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13632.001.patch
>
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that the 
> process couldn't be started.






[jira] [Created] (HADOOP-13644) Replace config key literal strings with config key names

2016-09-22 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13644:
--

 Summary: Replace config key literal strings with config key names 
 Key: HADOOP-13644
 URL: https://issues.apache.org/jira/browse/HADOOP-13644
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Chen Liang
Priority: Minor


There are some places that use config key literal strings instead of config key 
names, e.g.
{code:title=IOUtils.java}
copyBytes(in, out, conf.getInt("io.file.buffer.size", 4096), true);
{code}

We should replace places like this.
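
As an illustration, the {{io.file.buffer.size}} case can use the existing constants in {{CommonConfigurationKeysPublic}}; this is just a sketch, and other call sites will have their own key classes:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

public class BufferSizeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Literal-string form this issue wants to replace:
    int viaLiteral = conf.getInt("io.file.buffer.size", 4096);
    // Constant-based form: the key and its default live next to each other.
    int viaConstant = conf.getInt(
        CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
        CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
    System.out.println(viaLiteral + " == " + viaConstant);
  }
}
{code}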






[jira] [Updated] (HADOOP-13644) Replace config key literal strings with config key names

2016-09-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13644:
---
Component/s: conf

> Replace config key literal strings with config key names 
> -
>
> Key: HADOOP-13644
> URL: https://issues.apache.org/jira/browse/HADOOP-13644
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Mingliang Liu
>Assignee: Chen Liang
>Priority: Minor
>
> There are some places that use config key literal strings instead of config 
> key names, e.g.
> {code:title=IOUtils.java}
> copyBytes(in, out, conf.getInt("io.file.buffer.size", 4096), true);
> {code}
> We should replace places like this.






[jira] [Commented] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514975#comment-15514975
 ] 

Hadoop QA commented on HADOOP-13643:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
0s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829959/HADOOP-13643.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a0f5885a56fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5ffd4b7 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10577/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10577/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.
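
For clarity, the corrected conversion direction (a trivial sketch, not the actual patch):
{code}
int fileSizeKb = 10 * 1024;          // e.g. 10 MB expressed in KB
int fileSizeMb = fileSizeKb / 1024;  // KB -> MB is division, not multiplication
{code}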

[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514962#comment-15514962
 ] 

Xiao Chen commented on HADOOP-13590:


Thank you for the prompt reviews Kai!
Will wait for other comments from the audience, and make sure the getEndTime 
comment gets addressed in the final patch.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong and results in an IOE, no further renewal 
> will be done even after the problem recovers, and the client will eventually 
> fail to authenticate. We should retry on a best-effort basis until the TGT 
> expires, in the hope that the error recovers before then.
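
The retry-until-expiry idea can be sketched independently of UGI; a minimal, hypothetical Java helper with illustrative names (not the actual patch):
{code}
import java.io.IOException;

public final class RetryUntilDeadline {
  interface Action { void run() throws IOException; }

  /** Keep retrying the action on IOException until the deadline (e.g. the TGT end time). */
  static void retryUntil(Action action, long deadlineMillis, long retryIntervalMillis)
      throws IOException, InterruptedException {
    while (true) {
      try {
        action.run();
        return;
      } catch (IOException ioe) {
        if (System.currentTimeMillis() + retryIntervalMillis >= deadlineMillis) {
          throw ioe;                        // give up only once the deadline is reached
        }
        Thread.sleep(retryIntervalMillis);  // back off, then retry the renewal
      }
    }
  }
}
{code}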






[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514936#comment-15514936
 ] 

Kai Zheng commented on HADOOP-13590:


Thanks [~xiaochen] for the update! It looks good to me now. +1 from me.

If you get a chance to address other reviewers' comments in a follow-up, one very 
minor thing to fix: {{tgt.getEndTime().getTime()}} is repeated several times in 
the {{catch}} block.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong and results in an IOE, no further renewal 
> will be done even after the problem recovers, and the client will eventually 
> fail to authenticate. We should retry on a best-effort basis until the TGT 
> expires, in the hope that the error recovers before then.






[jira] [Commented] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514930#comment-15514930
 ] 

Hadoop QA commented on HADOOP-13627:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 97 unchanged - 1 fixed = 98 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13627 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829954/HADOOP-13627.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a9c99b964e3b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fc632a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10576/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10576/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10576/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
>   

[jira] [Updated] (HADOOP-13641) Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation

2016-09-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13641:
---
Assignee: Huafeng Wang

> Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation
> ---
>
> Key: HADOOP-13641
> URL: https://issues.apache.org/jira/browse/HADOOP-13641
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Huafeng Wang
>Priority: Minor
>  Labels: newbie
>
> From [~drankye]'s comment in HADOOP-13590:
> Could we return earlier at the beginning so we can avoid at least 2 level of 
> indents and make the whole block more readable?
> {code}
>   /**Spawn a thread to do periodic renewals of kerberos credentials*/
>   private void spawnAutoRenewalThreadForUserCreds() {
> if (isSecurityEnabled()) {
>   //spawn thread only if we have kerb credentials
>   if (user.getAuthenticationMethod() == AuthenticationMethod.KERBEROS &&
>   !isKeytab) {
> ...
> ...
>  very deep nested ...
> ...
> {code}






[jira] [Commented] (HADOOP-13642) Move RecordFactory from YARN to Common

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514891#comment-15514891
 ] 

Hadoop QA commented on HADOOP-13642:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 75 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
38s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 38s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 43s{color} | {color:orange} root: The patch generated 23 new + 4979 
unchanged - 23 fixed = 5002 total (was 5002) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 9 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 14s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 22s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-yarn-server-web-proxy in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {c

[jira] [Commented] (HADOOP-13317) Add logs to KMS servier-side to improve supportability

2016-09-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514873#comment-15514873
 ] 

Xiao Chen commented on HADOOP-13317:


Thanks Suraj for revving. Overall looks good.

Some more comments:
- From my previous comment: for createKey, is it safe to put the cipher in the logs?
- I propose logging all params at debug level. If we need to look at the debug log, 
things are pretty much not what we expected them to be, so more information won't 
hurt.
- Sorry, I may not have been clear in {{when the underlying provider throws an 
exception, it just propagates into tomcat and we end up seeing nothing in the 
KMS log}}. Take {{createKey}} for example. If {{provider.createKey}} or 
{{provider.flush}} throws, would we see anything in the KMS log? Last time I ended 
up adding a try-catch around the entire method; not sure if there's a better way. 
You can try it out by hard-coding that block to throw and checking the log. 
Another advantage is that we can also add a trace-level exit log, symmetric to the 
entering one.
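
One way to picture the try-catch suggestion is a small catch-log-rethrow wrapper. This is only a sketch with made-up names; the real KMS.java methods and logger will differ:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggedCallExample {
  private static final Logger LOG = LoggerFactory.getLogger(LoggedCallExample.class);

  interface Call<T> { T run() throws Exception; }

  static <T> T logged(String op, Call<T> call) throws Exception {
    LOG.trace("Entering {}", op);
    try {
      T result = call.run();
      LOG.trace("Exiting {}", op);     // symmetric exit log at trace level
      return result;
    } catch (Exception e) {
      LOG.warn("{} failed", op, e);    // the failure now shows up in the KMS log ...
      throw e;                         // ... and still propagates to the container
    }
  }
}
{code}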

> Add logs to KMS servier-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There're currently no logs 
> at all, making trouble shooting difficult.






[jira] [Commented] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514846#comment-15514846
 ] 

Aaron Fabbri commented on HADOOP-13643:
---

Ping [~ste...@apache.org].. easy code review.

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.






[jira] [Commented] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514839#comment-15514839
 ] 

Aaron Fabbri commented on HADOOP-13643:
---

Hat tip to [~mackrorysd] for spotting this one.

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.






[jira] [Updated] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13643:
--
Status: Patch Available  (was: Open)

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.






[jira] [Updated] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13643:
--
Attachment: HADOOP-13643.001.patch

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13643.001.patch
>
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.






[jira] [Moved] (HADOOP-13643) Math error in AbstractContractDistCpTest

2016-09-22 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri moved HDFS-10890 to HADOOP-13643:
--

Affects Version/s: (was: 2.8.0)
   2.8.0
 Target Version/s:   (was: 2.9.0)
  Component/s: (was: distcp)
  Key: HADOOP-13643  (was: HDFS-10890)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Math error in AbstractContractDistCpTest
> 
>
> Key: HADOOP-13643
> URL: https://issues.apache.org/jira/browse/HADOOP-13643
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
>
> There is a minor math error in AbstractContractDistCpTest when calculating 
> file size:
> {code}
> int fileSizeMb = fileSizeKb * 1024;
> {code}
> This should be division, not multiplication.






[jira] [Updated] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-09-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13627:
---
Attachment: HADOOP-13627.02.patch

Thanks [~ste...@apache.org] for the review and suggestions! I wasn't aware of 
{{PathIOException}} or {{FsExceptionMessages}}, helpful to know.

Patch 2 to accommodate all of them.

I should mention that, with #4, the exception messages may change slightly. 
- Added username to exception in {{getUGIFromTicketCache}}.
- Some places have 'user:' before username.

But I don't think our compat rules restrict this. The message changes shouldn't 
prevent someone from googling the exception, which is the main concern.

Also pasting below some output FYI:
before:
{quote}java.io.IOException: Login failure for foo from keytab 
/var/folders/6l/7hfzdv912jvclwrzyfndwjn8gp/T/junit1826438682419772260/foo.keytab:
 javax.security.auth.login.LoginException: _{quote}
after:
{quote}org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: foo from keytab 
/var/folders/6l/7hfzdv912jvclwrzyfndwjn8gp/T/junit2928287392078972940/foo.keytab
 javax.security.auth.login.LoginException: _
{quote}

before:
{quote}
java.io.IOException: failure to login using ticket cache file cache
{quote}
after:
{quote}
org.apache.hadoop.security.KerberosAuthException: failure to login: for user: 
user using ticket cache file: cache javax.security.auth.login.LoginException: 
Unable to obtain Principal Name for authentication 
{quote}

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Attachments: HADOOP-13627.01.patch, HADOOP-13627.02.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this
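
A bare-bones sketch of the shape being proposed (illustrative only; the real class would live in {{org.apache.hadoop.security}} and carry more context such as user, keytab and principal):
{code}
import java.io.IOException;

public class KerberosAuthException extends IOException {
  /** Public constant so tests can assert on the text without copying literals. */
  public static final String FAILURE_TO_LOGIN = "failure to login";

  public KerberosAuthException(String msg, Throwable cause) {
    super(msg, cause);
  }
}

// Callers could then catch the kerberos-specific failure instead of a bare IOException:
//   try { loginUserFromKeytab(principal, keytab); }
//   catch (KerberosAuthException kae) { /* retry or report with full context */ }
{code}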






[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514715#comment-15514715
 ] 

Wei-Chiu Chuang commented on HADOOP-13535:
--

+1 Will commit by end of week unless there are other comments. Thanks

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch, 
> HADOOP-13535.003.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.






[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-09-22 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514586#comment-15514586
 ] 

Dima Spivak commented on HADOOP-13397:
--

This is an important point, [~ozawa]. On the HBase side, we should be okay with 
legal because we don't distribute any images we create; they are used purely in 
our automation and never pushed to a user-accessible Registry. Definitely 
something to watch out for, though.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community-version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate Docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring them 
> manually
> 3. Start Hadoop process as no-daemon
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E






[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514579#comment-15514579
 ] 

Wei-Chiu Chuang commented on HADOOP-12974:
--

Hello [~eclark] would you mind if I rebase it and make a small update to the 
patch you posted?
Thanks!

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch, HADOOP-12974v4.patch, 
> HADOOP-12974v5.patch
>
>







[jira] [Commented] (HADOOP-13548) Remove recursive dependencies of credential providers in LdapGroupsMapping

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514574#comment-15514574
 ] 

Wei-Chiu Chuang commented on HADOOP-13548:
--

This fix is certainly not as crisp as I'd like it to be, but it solves the 
problem. [~anu] [~lmccay] may I ask for your review? Thanks!

> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --
>
> Key: HADOOP-13548
> URL: https://issues.apache.org/jira/browse/HADOOP-13548
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13548.001.patch, HADOOP-13548.002.patch
>
>
> HADOOP-11934 discovered an infinite loop of dependencies in the use of 
> credential provider in LdapGroupsMapping. It added a new localjceks:// URI to 
> workaround the problem. The assumption is that the groups mapping is used 
> only in NameNode and that using a local credential file is not a problem.
> However, there are cases where Hadoop clients, such as Sqoop, may use hdfs:// 
> based credential provider and use LdapGroupsMapping at the same time. We 
> should use HADOOP-12846 to exclude hdfs:// URI credential providers.
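
For illustration only, the kind of filtering being discussed might look like the 
sketch below. It is a guess at the approach, not the attached patch; the class 
name is made up, and the assumption that HDFS-backed providers can be recognized 
from their URI prefixes (hdfs:// or jceks://hdfs) is mine.

{code}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch, not the HADOOP-13548 patch. */
class NonHdfsProviderFilter {
  /** Returns a copy of conf whose credential provider path has no HDFS-backed URIs. */
  static Configuration excludeHdfsProviders(Configuration conf) {
    Configuration filtered = new Configuration(conf);
    String path = conf.get("hadoop.security.credential.provider.path", "");
    StringBuilder kept = new StringBuilder();
    for (String uri : path.split(",")) {
      String trimmed = uri.trim();
      if (!trimmed.isEmpty() && !isHdfsBacked(trimmed)) {
        if (kept.length() > 0) {
          kept.append(',');
        }
        kept.append(trimmed);  // keep local providers; drop HDFS-backed ones
      }
    }
    filtered.set("hadoop.security.credential.provider.path", kept.toString());
    return filtered;
  }

  /** Assumption: an HDFS-backed provider is recognizable from its URI prefix. */
  static boolean isHdfsBacked(String uri) {
    return uri.startsWith("hdfs://") || uri.startsWith("jceks://hdfs");
  }
}
{code}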



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13638) KMS should set UGI's Configuration object properly

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514482#comment-15514482
 ] 

Wei-Chiu Chuang commented on HADOOP-13638:
--

Checkstyle warnings can't be removed unless we refactor test methods.

> KMS should set UGI's Configuration object properly
> --
>
> Key: HADOOP-13638
> URL: https://issues.apache.org/jira/browse/HADOOP-13638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13638.001.patch, HADOOP-13638.002.patch
>
>
> We found that the Configuration object in UGI in KMS server is not 
> initialized properly, therefore it does not load core-site.xml from 
> {{KMSConfiguration.KMS_CONFIG_DIR}}.
> This becomes a problem when the Hadoop cluster uses LdapGroupsMapping for 
> group resolution, because the UGI in KMS falls back to the default 
> JniBasedUnixGroupsMappingWithFallback (defined in core-default.xml) and is 
> thus not consistent with the Hadoop cluster.
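
To make the described gap concrete, a minimal sketch of the kind of 
initialization being asked for is below; this is not the attached patch, and 
{{kmsConfDir}} simply stands in for {{KMSConfiguration.KMS_CONFIG_DIR}}.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

/** Hypothetical sketch, not the attached patch. */
class KmsUgiConfSketch {
  static void initUgiConf(String kmsConfDir) {
    Configuration conf = new Configuration();
    // Load the KMS copy of core-site.xml so settings such as the group mapping
    // implementation match the rest of the cluster instead of core-default.xml.
    conf.addResource(new Path(kmsConfDir, "core-site.xml"));
    UserGroupInformation.setConfiguration(conf);
  }
}
{code}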



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13642) Move RecordFactory from YARN to Common

2016-09-22 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13642:
-
Status: Patch Available  (was: Open)

> Move RecordFactory from YARN to Common
> --
>
> Key: HADOOP-13642
> URL: https://issues.apache.org/jira/browse/HADOOP-13642
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: HADOOP-13642.000.patch
>
>
> Some of the RecordFactory could be moved from YARN to Common for easier 
> sharing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13642) Move RecordFactory from YARN to Common

2016-09-22 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13642:
-
Attachment: HADOOP-13642.000.patch

Proposal. It touches a lot of files, but that is mostly caused by the updated imports.

> Move RecordFactory from YARN to Common
> --
>
> Key: HADOOP-13642
> URL: https://issues.apache.org/jira/browse/HADOOP-13642
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: HADOOP-13642.000.patch
>
>
> Some of the RecordFactory could be moved from YARN to Common for easier 
> sharing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13638) KMS should set UGI's Configuration object properly

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514321#comment-15514321
 ] 

Hadoop QA commented on HADOOP-13638:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 3 new + 114 unchanged - 3 fixed = 117 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13638 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829906/HADOOP-13638.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4f77a4ead812 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10574/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10574/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10574/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMS should set UGI's Configuration object properly
> --
>
> Key: HADOOP-13638
> URL: https://issues.apache.org/jira/browse/HADOOP-13638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assign

[jira] [Comment Edited] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514317#comment-15514317
 ] 

Allen Wittenauer edited comment on HADOOP-13632 at 9/22/16 7:52 PM:


We're basically racing against the process startup time and subsequent failure. 
We might pass that ps but still fail the renice, disown, or subsequent ps 
check.  That said, it wouldn't hurt to put another ps check after the timer and 
before the pid file write to catch hopefully a good chunk of the early failures.

The outfile may or may not be the correct file to look at, BTW. e.g., 
fs.defaultFS pointing to file: will leave the out file empty.

Two Sidenotes: 

* I wonder why this code doesn't use hadoop_status_daemon.  I'm sure there is a 
good reason including that it was probably written before that function 
existed.  It probably should use it though so that we take advantage of 
whatever features someone makes if they replace it.  On the flip side, this 
code is extremely time critical (racey!) so the faster we are at completing, 
the better.

* This is some of my least favorite code that I've written.  Handling pid files 
outside of a daemon is full of fragility even outside of the edge cases. :(


was (Author: aw):
We're basically racing against the process startup time and subsequent failure. 
We might pass that ps but still fail the renice, disown, or subsequent ps 
check.  That said, it wouldn't hurt to put another ps check after the timer and 
before the pid file write to catch hopefully a good chunk of the early failures.

Two Sidenotes: 

* I wonder why this code doesn't use hadoop_status_daemon.  I'm sure there is a 
good reason including that it was probably written before that function 
existed.  It probably should use it though so that we take advantage of 
whatever features someone makes if they replace it.  On the flip side, this 
code is extremely time critical (racey!) so the faster we are at completing, 
the better.

* This is some of my least favorite code that I've written.  Handling pid files 
outside of a daemon is full of fragility even outside of the edge cases. :(

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly instead of this renice error, we said that the 
> process couldn't be started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514317#comment-15514317
 ] 

Allen Wittenauer commented on HADOOP-13632:
---

We're basically racing against the process startup time and subsequent failure. 
We might pass that ps but still fail the renice, disown, or subsequent ps 
check.  That said, it wouldn't hurt to put another ps check after the timer and 
before the pid file write to catch hopefully a good chunk of the early failures.

Two Sidenotes: 

* I wonder why this code doesn't use hadoop_status_daemon.  I'm sure there is a 
good reason including that it was probably written before that function 
existed.  It probably should use it though so that we take advantage of 
whatever features someone makes if they replace it.  On the flip side, this 
code is extremely time critical (racey!) so the faster we are at completing, 
the better.

* This is some of my least favorite code that I've written.  Handling pid files 
outside of a daemon is full of fragility even outside of the edge cases. :(

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly instead of this renice error, we said that the 
> process couldn't be started.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13642) Move RecordFactory from YARN to Common

2016-09-22 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514205#comment-15514205
 ] 

Inigo Goiri commented on HADOOP-13642:
--

In HDFS-10882, we are using ProtoBuf as in YARN-2915. However, YARN has the 
RecordFactory to locate the PBImpl classes. As this functionality could be 
common to both HDFS and YARN, I'm proposing to move it to Common.
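
For readers unfamiliar with it, the existing YARN usage being generalized looks 
roughly like the sketch below; the record class is only an example and the 
sketch is not part of the proposed patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationRequest;
import org.apache.hadoop.yarn.factories.RecordFactory;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;

/** Rough illustration of the existing YARN usage being generalized. */
class RecordFactorySketch {
  static GetNewApplicationRequest newRequest(Configuration conf) {
    // The factory locates and instantiates the generated *PBImpl class
    // behind the requested record interface.
    RecordFactory factory = RecordFactoryProvider.getRecordFactory(conf);
    return factory.newRecordInstance(GetNewApplicationRequest.class);
  }
}
{code}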

> Move RecordFactory from YARN to Common
> --
>
> Key: HADOOP-13642
> URL: https://issues.apache.org/jira/browse/HADOOP-13642
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Inigo Goiri
>
> Some of the RecordFactory could be moved from YARN to Common for easier 
> sharing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13642) Move RecordFactory from YARN to Common

2016-09-22 Thread Inigo Goiri (JIRA)
Inigo Goiri created HADOOP-13642:


 Summary: Move RecordFactory from YARN to Common
 Key: HADOOP-13642
 URL: https://issues.apache.org/jira/browse/HADOOP-13642
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Inigo Goiri


Some of the RecordFactory could be moved from YARN to Common for easier sharing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514149#comment-15514149
 ] 

Hadoop QA commented on HADOOP-13590:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829894/HADOOP-13590.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e33cf06f9d6f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d619b4 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10573/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10573/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10573/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Componen

[jira] [Comment Edited] (HADOOP-13638) KMS should set UGI's Configuration object properly

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514119#comment-15514119
 ] 

Wei-Chiu Chuang edited comment on HADOOP-13638 at 9/22/16 6:40 PM:
---

v02: The Configuration object is shared by both the KMS client and server in unit 
tests because UGI gets/sets it through a static variable. I don't see a way to 
isolate the client's configuration from the server's.

Fortunately, the UGI-sensitive configuration names are independent between the KMS 
client and server. As a workaround, make sure the client configurations are 
copied to the server's so that the client can read them.


was (Author: jojochuang):
v02: The Configuration object is shared by both KMS client and server in unit 
tests because UGI gets/sets it to a static variable.

As a workaround, make sure the client configurations are copied to the server's 
so that client can read them.

> KMS should set UGI's Configuration object properly
> --
>
> Key: HADOOP-13638
> URL: https://issues.apache.org/jira/browse/HADOOP-13638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13638.001.patch, HADOOP-13638.002.patch
>
>
> We found that the Configuration object in UGI in KMS server is not 
> initialized properly, therefore it does not load core-site.xml from 
> {{KMSConfiguration.KMS_CONFIG_DIR}}.
> This becomes a problem when the Hadoop cluster uses LdapGroupsMapping for 
> group resolution, because the UGI in KMS falls back to the default 
> JniBasedUnixGroupsMappingWithFallback (defined in core-default.xml) and is 
> thus not consistent with the Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13638) KMS should set UGI's Configuration object properly

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13638:
-
Attachment: HADOOP-13638.002.patch

v02: The Configuration object is shared by both KMS client and server in unit 
tests because UGI gets/sets it to a static variable.

As a workaround, make sure the client configurations are copied to the server's 
so that client can read them.

> KMS should set UGI's Configuration object properly
> --
>
> Key: HADOOP-13638
> URL: https://issues.apache.org/jira/browse/HADOOP-13638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13638.001.patch, HADOOP-13638.002.patch
>
>
> We found that the Configuration object in UGI in KMS server is not 
> initialized properly, therefore it does not load core-site.xml from 
> {{KMSConfiguration.KMS_CONFIG_DIR}}.
> This becomes a problem when the Hadoop cluster uses LdapGroupsMapping for 
> group resolution, because the UGI in KMS falls back to the default 
> JniBasedUnixGroupsMappingWithFallback (defined in core-default.xml) and is 
> thus not consistent with the Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-09-22 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514052#comment-15514052
 ] 

churro morales commented on HADOOP-13578:
-

I was wondering whether people are interested in getting zstandard integrated into 
the hadoop-mapreduce-client-nativetask module? That is my only remaining task for 
trunk (since that code only resides there). I have backports ready for the 2.7 and 
2.6 branches as well. Maybe we can split that feature into a separate JIRA ticket? 
If that's the case, I can have patches up very soon. I was just wondering what 
everyone's thoughts are here.

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
Attachment: HADOOP-13590.06.patch

Thanks for the review and feedback Kai!

Patch 6 attached, addressed your comments, with the exception of:
bq. 1. show now and renewalFailures values in the warning log?
Updated the warning log and removed the debug log in the inner call.
bq. 3. Could we return earlier at the beginning so we can avoid at least 2 
level of indents and make the whole block more readable
Agreed. It feels like we should do this separately; created HADOOP-13641.
bq. 4.Just a question. Any other exception than IOException could be thrown 
there?
Checking line by line, I think only IOE and {{InterruptedException}} can be 
thrown.


> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry with our best effort, until the TGT expires, in the 
> hope that the error recovers before that.
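
As a conceptual sketch of the retry being described (not the attached patch), the 
renewal thread could keep retrying on IOException until the TGT end time; the 
backoff value and the use of {{reloginFromTicketCache()}} as the renewal step are 
assumptions made only for illustration.

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

/** Conceptual sketch only; not the HADOOP-13590 patch. */
class RenewRetrySketch {
  static void renewWithRetry(UserGroupInformation ugi, long tgtEndTime)
      throws InterruptedException {
    long retryInterval = 60_000L;  // placeholder backoff, not the patch's value
    while (System.currentTimeMillis() < tgtEndTime) {
      try {
        ugi.reloginFromTicketCache();  // stand-in for the real renewal step
        return;                        // success: let the normal schedule resume
      } catch (IOException ioe) {
        long remaining = tgtEndTime - System.currentTimeMillis();
        if (remaining <= 0) {
          return;                      // TGT has expired; give up as before
        }
        Thread.sleep(Math.min(retryInterval, remaining));
      }
    }
  }
}
{code}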



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13641) Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation

2016-09-22 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13641:
--

 Summary: Update UGI#spawnAutoRenewalThreadForUserCreds to reduce 
indentation
 Key: HADOOP-13641
 URL: https://issues.apache.org/jira/browse/HADOOP-13641
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiao Chen
Priority: Minor


>From [~drankye]'s comment in HADOOP-13590:

Could we return earlier at the beginning so we can avoid at least 2 level of 
indents and make the whole block more readable?
{code}
  /**Spawn a thread to do periodic renewals of kerberos credentials*/
  private void spawnAutoRenewalThreadForUserCreds() {
if (isSecurityEnabled()) {
  //spawn thread only if we have kerb credentials
  if (user.getAuthenticationMethod() == AuthenticationMethod.KERBEROS &&
  !isKeytab) {
...
...
 very deep nested ...
...
{code}
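
One possible shape of that early return, reusing the fields and methods from the 
snippet above (a sketch of the suggestion, not the eventual patch):

{code}
  /** Spawn a thread to do periodic renewals of kerberos credentials. */
  private void spawnAutoRenewalThreadForUserCreds() {
    if (!isSecurityEnabled()
        || user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
        || isKeytab) {
      return;  // nothing to renew: bail out before the nested logic
    }
    // ... existing renewal-thread logic, now two indent levels shallower ...
  }
{code}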



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13548) Remove recursive dependencies of credential providers in LdapGroupsMapping

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513886#comment-15513886
 ] 

Hadoop QA commented on HADOOP-13548:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 15s{color} 
| {color:red} root generated 1 new + 709 unchanged - 0 fixed = 710 total (was 
709) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 65 unchanged - 0 fixed = 70 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829881/HADOOP-13548.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ffaa5d72347 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 537095d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10572/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10572/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10572/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10572/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --
>
> Key: H

[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513860#comment-15513860
 ] 

Tsuyoshi Ozawa commented on HADOOP-13397:
-

{quote}
a) I and I know others as well have some rather large licensing questions 
around Docker images. They effectively act as a binary distribution and it is 
very much against ASF rules to distribute GPL and other Category X components. 
It makes me extremely uncomfortable to move forward without some clarification 
from legal. (Yes, I know other ASF projects are publishing images on docker 
hub. Hopefully that means that there is a JIRA issue in the LEGAL project to 
point to.) This is a blocking issue that really needs to get clarified before 
further time investment.
{quote}

[~aw] [~dimaspivak] FYI, opened LEGAL-270.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate Docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start Hadoop process as no-daemon
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix some warnings by findbugs in hadoop-maven-plugin

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513856#comment-15513856
 ] 

Hudson commented on HADOOP-13602:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10473 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10473/])
HADOOP-13602. Fix some warnings by findbugs in hadoop-maven-plugin. (ozawa: rev 
8d619b4896ac31f63fd0083594b6e7d207ef71a0)
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java


> Fix some warnings by findbugs in hadoop-maven-plugin
> 
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13638) KMS should set UGI's Configuration object properly

2016-09-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513843#comment-15513843
 ] 

Xiao Chen commented on HADOOP-13638:


Thanks for looking into the test, Wei-Chiu. Looking forward to the fix.
This will also give the audience more time to review it. :)

> KMS should set UGI's Configuration object properly
> --
>
> Key: HADOOP-13638
> URL: https://issues.apache.org/jira/browse/HADOOP-13638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13638.001.patch
>
>
> We found that the Configuration object in UGI in KMS server is not 
> initialized properly, therefore it does not load core-site.xml from 
> {{KMSConfiguration.KMS_CONFIG_DIR}}.
> This becomes a problem when the Hadoop cluster uses LdapGroupsMapping for 
> group resolution, because the UGI in KMS falls back to the default 
> JniBasedUnixGroupsMappingWithFallback (defined in core-default.xml) and is 
> thus not consistent with the Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix some warnings by findbugs in hadoop-maven-plugin

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513788#comment-15513788
 ] 

Tsuyoshi Ozawa commented on HADOOP-13602:
-

Committed this to trunk and branch-2.  Opened HADOOP-13640 to track a remaining 
warning.

> Fix some warnings by findbugs in hadoop-maven-plugin
> 
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13640) Fix findbugs warning in VersionInfoMojo.java

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13640:
---

 Summary: Fix findbugs warning in VersionInfoMojo.java
 Key: HADOOP-13640
 URL: https://issues.apache.org/jira/browse/HADOOP-13640
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


Reported by Arpit on HADOOP-13602
{quote}
[INFO] 
org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo.getSvnUriInfo(String)
 uses String.indexOf(String) instead of String.indexOf(int) 
["org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo"] At 
VersionInfoMojo.java:[lines 49-341]
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13602) Fix some warnings by findbugs in hadoop-maven-plugin

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13602:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

> Fix some warnings by findbugs in hadoop-maven-plugin
> 
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13602) Fix some warnings by findbugs in hadoop-maven-plugin

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13602:

Summary: Fix some warnings by findbugs in hadoop-maven-plugin  (was: Fix 
findbugs warning in hadoop-maven-plugin)

> Fix some warnings by findbugs in hadoop-maven-plugin
> 
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13548) Remove recursive dependencies of credential providers in LdapGroupsMapping

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13548:
-
Attachment: HADOOP-13548.002.patch

> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --
>
> Key: HADOOP-13548
> URL: https://issues.apache.org/jira/browse/HADOOP-13548
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13548.001.patch, HADOOP-13548.002.patch
>
>
> HADOOP-11934 discovered an infinite loop of dependencies in the use of 
> credential provider in LdapGroupsMapping. It added a new localjceks:// URI to 
> workaround the problem. The assumption is that the groups mapping is used 
> only in NameNode and that using a local credential file is not a problem.
> However, there are cases where Hadoop clients, such as Sqoop, may use hdfs:// 
> based credential provider and use LdapGroupsMapping at the same time. We 
> should use HADOOP-12846 to exclude hdfs:// URI credential providers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13548) Remove recursive dependencies of credential providers in LdapGroupsMapping

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13548:
-
Status: Patch Available  (was: Open)

> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --
>
> Key: HADOOP-13548
> URL: https://issues.apache.org/jira/browse/HADOOP-13548
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13548.001.patch, HADOOP-13548.002.patch
>
>
> HADOOP-11934 discovered an infinite loop of dependencies in the use of 
> credential provider in LdapGroupsMapping. It added a new localjceks:// URI to 
> workaround the problem. The assumption is that the groups mapping is used 
> only in NameNode and that using a local credential file is not a problem.
> However, there are cases where Hadoop clients, such as Sqoop, may use hdfs:// 
> based credential provider and use LdapGroupsMapping at the same time. We 
> should use HADOOP-12846 to exclude hdfs:// URI credential providers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13548) Remove recursive dependencies of credential providers in LdapGroupsMapping

2016-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13548:
-
Attachment: (was: HADOOP-13548.002.patch)

> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --
>
> Key: HADOOP-13548
> URL: https://issues.apache.org/jira/browse/HADOOP-13548
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13548.001.patch
>
>
> HADOOP-11934 discovered an infinite loop of dependencies in the use of 
> credential provider in LdapGroupsMapping. It added a new localjceks:// URI to 
> workaround the problem. The assumption is that the groups mapping is used 
> only in NameNode and that using a local credential file is not a problem.
> However, there are cases where Hadoop clients, such as Sqoop, may use hdfs:// 
> based credential provider and use LdapGroupsMapping at the same time. We 
> should use HADOOP-12846 to exclude hdfs:// URI credential providers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13602) Fix findbugs warning in hadoop-maven-plugin

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513712#comment-15513712
 ] 

Tsuyoshi Ozawa commented on HADOOP-13602:
-

[~arpitagarwal] thanks for the review! I'm checking this in and creating a new 
ticket to address the remaining warning.

> Fix findbugs warning in hadoop-maven-plugin
> ---
>
> Key: HADOOP-13602
> URL: https://issues.apache.org/jira/browse/HADOOP-13602
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13602.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Attachment: 404_error_browser.png

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> HADOOP-13628.02.patch, HADOOP-13628.03.patch, HADOOP-13628.04.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite a lot of overhead to send the whole configuration in an HTTP 
> response over the network. Propose to support a {{name}} parameter in the HTTP 
> request, by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.
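
For illustration, the server-side lookup could be as small as the sketch below. 
It is not the attached patch; the class and method names are made up, and only 
the name/404 handling discussed in this thread is shown.

{code}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch of the lookup, not the actual ConfServlet change. */
class ConfLookupSketch {
  static void writeProperty(Configuration conf, HttpServletRequest req,
      HttpServletResponse resp) throws IOException {
    String name = req.getParameter("name");  // e.g. yarn.nodemanager.aux-services
    if (name != null && conf.getRaw(name) == null) {
      // Unknown names produce a 404, as shown later in this thread.
      resp.sendError(HttpServletResponse.SC_NOT_FOUND,
          "Property " + name + " not found");
      return;
    }
    // ... serialize either the single property or the full configuration ...
  }
}
{code}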



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513477#comment-15513477
 ] 

Allen Wittenauer commented on HADOOP-13560:
---

Squash your commits.  If you look at the .patch file generated 
(https://patch-diff.githubusercontent.com/raw/apache/hadoop/pull/125.patch) 
S3AIncrementalOutputStream.java is definitely there.

Before the question gets asked, Yetus can't use the .diff version (with all the 
merge bits resolved) because the .diff version doesn't include binary artifacts.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rname and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-22 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513468#comment-15513468
 ] 

John Zhuge commented on HADOOP-7352:


[~ste...@apache.org] Do you have any comment on Patch 003?

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.
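
A minimal sketch of the behaviour being asked for (not the attached patch) is 
shown below; it relies only on the fact that java.io.File#list() returns null 
when a directory cannot be read.

{code}
import java.io.File;
import java.io.IOException;

/** Hypothetical sketch, not the HADOOP-7352 patch. */
class ListStatusSketch {
  static String[] listOrThrow(File dir) throws IOException {
    String[] names = dir.list();  // null on I/O errors, e.g. missing read permission
    if (names == null) {
      throw new IOException("Error accessing " + dir
          + ": unable to list directory (check that it exists and is readable)");
    }
    return names;
  }
}
{code}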



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513464#comment-15513464
 ] 

Weiwei Yang commented on HADOOP-13628:
--

Hello [~ste...@apache.org]

# I have created HADOOP-13639 for the plain text stuff.
# For the 404 error, use the following command

{code}
curl --header "Accept:application/json" 
http://deale1.fyre.ibm.com:8088/conf?name=xxx
{code}

you will get the following output

{code}
Error 404 Property xxx not found

HTTP ERROR 404
Problem accessing /conf. Reason:
    Property xxx not found
Powered by Jetty://
...
{code}

from a browser, you'll get an error like the screen shot [^404_error_browser.png]

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> HADOOP-13628.02.patch, HADOOP-13628.03.patch, HADOOP-13628.04.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite a lot of overhead to send the whole configuration in an HTTP 
> response over the network. Propose to support a {{name}} parameter in the HTTP 
> request, by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13639) Support plain text in ConfServlet http response

2016-09-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HADOOP-13639:


Assignee: Weiwei Yang

> Support plain text in ConfServlet http response
> ---
>
> Key: HADOOP-13639
> URL: https://issues.apache.org/jira/browse/HADOOP-13639
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> Per discussion in HADOOP-13628, it would be good if ConfServlet also support 
> to return plain text http response. See more discussion 
> [here|https://issues.apache.org/jira/browse/HADOOP-13628?focusedCommentId=15507590&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15507590].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13639) Support plain text in ConfServlet http response

2016-09-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13639:
-
Description: Per discussion in HADOOP-13628, it would be good if 
ConfServlet also supported returning a plain-text http response. See more 
discussion 
[here|https://issues.apache.org/jira/browse/HADOOP-13628?focusedCommentId=15507590&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15507590].
  (was: Per discussion in HADOOP-13628, it would be good if ConfServlet also 
support to return plain text http response.)

> Support plain text in ConfServlet http response
> ---
>
> Key: HADOOP-13639
> URL: https://issues.apache.org/jira/browse/HADOOP-13639
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>
> Per discussion in HADOOP-13628, it would be good if ConfServlet also 
> supported returning a plain-text http response. See more discussion 
> [here|https://issues.apache.org/jira/browse/HADOOP-13628?focusedCommentId=15507590&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15507590].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13639) Support plain text in ConfServlet http response

2016-09-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created HADOOP-13639:


 Summary: Support plain text in ConfServlet http response
 Key: HADOOP-13639
 URL: https://issues.apache.org/jira/browse/HADOOP-13639
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.7.3
Reporter: Weiwei Yang


Per discussion in HADOOP-13628, it would be good if ConfServlet also supported 
returning a plain-text http response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13317) Add logs to KMS server-side to improve supportability

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513233#comment-15513233
 ] 

Hadoop QA commented on HADOOP-13317:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829842/HADOOP-13317-3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d0aef0f93a49 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 537095d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10570/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10570/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add logs to KMS server-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-1

[jira] [Updated] (HADOOP-13317) Add logs to KMS server-side to improve supportability

2016-09-22 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13317:
---
Attachment: HADOOP-13317-3.patch

Checkstyle fix

> Add logs to KMS server-side to improve supportability
> --
>
> Key: HADOOP-13317
> URL: https://issues.apache.org/jira/browse/HADOOP-13317
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13317-1.patch, HADOOP-13317-2.patch, 
> HADOOP-13317-3.patch, HADOOP-13317.patch
>
>
> [KMS.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java]
>  is the main class that serves KMS http requests. There are currently no logs 
> at all, making troubleshooting difficult.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513097#comment-15513097
 ] 

Hadoop QA commented on HADOOP-13614:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
32s{color} | {color:red} Docker failed to build yetus/hadoop:b59b8b7. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829827/HADOOP-13614-branch-2-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10569/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Attachment: HADOOP-13614-branch-2-002.patch

reattaching patch 002 so that it is the last file in the attachment list, then 
asking yetus to try it again

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Patch Available  (was: Open)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Open  (was: Patch Available)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513064#comment-15513064
 ] 

Steve Loughran commented on HADOOP-13628:
-

# go on, support text/plain too :)
# when there's a 404, what content type comes back? I ask as there's always the 
risk that bad client code will fail badly trying to parse the HTML, without 
looking at the status code. As long as text/html comes back, we get to deny all 
responsibility
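
To illustrate the point about status codes: a defensive client checks the response 
code before it parses anything. A minimal Java sketch of that (the host, port and 
property name here are purely illustrative):

{code}
// Sketch of a defensive client. The URL is illustrative; the content type of the
// error page is whatever the server sends, so it is never parsed as JSON here.
import java.net.HttpURLConnection;
import java.net.URL;

public class ConfClientCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rmhost:8088/conf?name=xxx");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    int status = conn.getResponseCode();
    if (status != HttpURLConnection.HTTP_OK) {
      // Do not try to parse the (likely HTML) error body as JSON.
      System.err.println("GET " + url + " returned " + status + " "
          + conn.getResponseMessage());
      return;
    }
    // Only a 200 body is read from conn.getInputStream() and parsed as JSON.
  }
}
{code}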

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch, HADOOP-13628.02.patch, 
> HADOOP-13628.03.patch, HADOOP-13628.04.patch
>
>
> Currently we can use rest API to retrieve all configuration properties per 
> daemon, but unable to get a specific property by name. This causes extra 
> parse work at client side when dealing with Hadoop configurations, and also 
> it's quite over head to send all configuration in a http response over 
> network. Propose to support following a {{name}} parameter in the http 
> request, by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512985#comment-15512985
 ] 

Steve Loughran commented on HADOOP-13627:
-

# have a real {{serialVersionUID}}; your IDE can generate that
# I'd go for public/unstable/evolving as the audience & stability info. Client 
code will see this.
# maybe we should move the statics out of UGI into something like 
UGIErrorMessages, similar to {{FSExceptionMessages}}
# one thing that could be useful in code would be to list the principal/user as 
a structured field in the exception, as {{PathIOException}} does

otherwise, looks nice
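
For reference, a rough sketch of the shape those review points describe; the field 
names ({{user}}, {{principal}}) and the builder-style setters are illustrative 
assumptions, not necessarily what the attached patch does:

{code}
// Illustrative sketch only -- not HADOOP-13627.01.patch. The structured fields
// mirror the PathIOException idea suggested above.
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Unstable
public class KerberosAuthException extends IOException {

  // In practice a generated value; any fixed long satisfies point 1.
  private static final long serialVersionUID = 31L;

  private String user;
  private String principal;

  public KerberosAuthException(String msg) {
    super(msg);
  }

  public KerberosAuthException(String msg, Throwable cause) {
    super(msg, cause);
  }

  public KerberosAuthException setUser(String u) {
    this.user = u;
    return this;
  }

  public KerberosAuthException setPrincipal(String p) {
    this.principal = p;
    return this;
  }

  @Override
  public String getMessage() {
    StringBuilder sb = new StringBuilder();
    if (super.getMessage() != null) {
      sb.append(super.getMessage());
    }
    if (user != null) {
      sb.append(" for user: ").append(user);
    }
    if (principal != null) {
      sb.append(" using principal: ").append(principal);
    }
    return sb.toString();
  }
}
{code}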

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
> Attachments: HADOOP-13627.01.patch
>
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512907#comment-15512907
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 74 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 17m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 57 new + 2560 
unchanged - 27 fixed = 2617 total (was 2587) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  8m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 582 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
16s{color} | {color:red} The patch 4277 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
28s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 19m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-auth-examples in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green

[jira] [Commented] (HADOOP-13634) Some configuration in Aliyun doc has been outdated

2016-09-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512835#comment-15512835
 ] 

Kai Zheng commented on HADOOP-13634:


Yeah, right. Good catch, Steve!

> Some configuration in Aliyun doc has been outdated
> --
>
> Key: HADOOP-13634
> URL: https://issues.apache.org/jira/browse/HADOOP-13634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13634-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13622) `-atomic` should not be supported while using `distcp` command in object file system

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512833#comment-15512833
 ] 

Steve Loughran commented on HADOOP-13622:
-

no, no timetable.


What would be good would be some special section in 
{{hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md}} on 
"fs shell commands and object stores"

* why you can get not found exceptions on an {{-fs ls s3a://bucket }} URL if 
you leave off the trailing /  (it's because there's no homedir, and the shell 
assumes it's a relative path)
* distcp atomic not working
* why you should use {{fs -put -d}} to upload files without the rename
* why you should use {{-skipTrash}} when deleting things
* why you shouldn't put secrets in the URLs, e.g.  s3n://awsId:secret@mybucket/

It also looks like the document doesn't even mention put's -d and -f options, so 
it could do with a quick comparison of the usage info printed by a 
{{hadoop fs}} command against what the docs say, updating the docs where they differ.

Would you be willing to do this?

> `-atomic` should not be supported while using `distcp` command in object file 
> system
> 
>
> Key: HADOOP-13622
> URL: https://issues.apache.org/jira/browse/HADOOP-13622
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> After discussing with [~ste...@apache.org] in HADOOP-13593, I get the point 
> that none of the object stores support atomic renames. So I file a new jira 
> and ready to provide a patch to disable `distcp -atomic`  in object file 
> system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13634) Some configuration in Aliyun doc has been outdated

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512791#comment-15512791
 ] 

Steve Loughran commented on HADOOP-13634:
-

Changed the title. If you can, please include a bit of scope in the title: in 
the change logs issued with releases, that's all people see, and a patch 
updating out-of-date config details could apply to pretty much anywhere in the 
hadoop documentation tree; we don't want to raise hopes in readers.

> Some configuration in Aliyun doc has been outdated
> --
>
> Key: HADOOP-13634
> URL: https://issues.apache.org/jira/browse/HADOOP-13634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13634-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Patch Available  (was: Open)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13634) Some configuration in Aliyun doc has been outdated

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13634:

Summary: Some configuration in Aliyun doc has been outdated  (was: Some 
configuration in doc has been outdated)

> Some configuration in Aliyun doc has been outdated
> --
>
> Key: HADOOP-13634
> URL: https://issues.apache.org/jira/browse/HADOOP-13634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13634-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512733#comment-15512733
 ] 

Steve Loughran commented on HADOOP-13560:
-

Javac failing
{code}
/testptch/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AIncrementalOutputStream.java:[245,6]
 error: 'else' without 'if'
{code}

Which is interesting, because the current patch and the tip of the PR don't have 
such a class.

Maybe [~aw] can tell me how I've got Yetus confused. In the meantime I'll do a 
superficial change and update the PR to see what happens.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Open  (was: Patch Available)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13531) S3A output streams to share a single LocalDirAllocator for round-robin drive use

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512722#comment-15512722
 ] 

Steve Loughran commented on HADOOP-13531:
-

this is probably superseded by HADOOP-13560

> S3A output streams to share a single LocalDirAllocator for round-robin drive 
> use
> 
>
> Key: HADOOP-13531
> URL: https://issues.apache.org/jira/browse/HADOOP-13531
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13531-001.patch
>
>
> {{S3AOutputStream}} uses {{LocalDirAllocator}} to choose a directory from the 
> comma-separated list of buffers —but it creates a new instance for every 
> output stream. This misses a key point of the allocator: for it to do 
> round-robin allocation, it needs to remember the last disk written to. If a 
> new instance is used for every file: no history.
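
To make the round-robin point concrete, a minimal sketch of the sharing idea, 
assuming a static allocator on the filesystem and the {{fs.s3a.buffer.dir}} key; 
this is only the shape of the change, not the attached patch:

{code}
// Sketch of the sharing idea only -- not HADOOP-13531-001.patch. The static
// field placement is an assumption for illustration.
private static LocalDirAllocator directoryAllocator;

private static synchronized LocalDirAllocator getDirectoryAllocator() {
  if (directoryAllocator == null) {
    // One allocator shared by all output streams: it remembers the last
    // directory used and so can round-robin across fs.s3a.buffer.dir entries.
    directoryAllocator = new LocalDirAllocator("fs.s3a.buffer.dir");
  }
  return directoryAllocator;
}

// Each S3AOutputStream then asks the shared allocator for buffer space instead
// of constructing its own LocalDirAllocator:
//   File tmp = getDirectoryAllocator().createTmpFileForWrite("output-", size, conf);
{code}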



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13544) JDiff reports unncessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512656#comment-15512656
 ] 

Hadoop QA commented on HADOOP-13544:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist hadoop-yarn-project/hadoop-yarn 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 33s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
15s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist hadoop-yarn-project/hadoop-yarn 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-annotations in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {col

[jira] [Comment Edited] (HADOOP-13584) Merge HADOOP-12756 branch to latest trunk

2016-09-22 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512522#comment-15512522
 ] 

Genmao Yu edited comment on HADOOP-13584 at 9/22/16 7:58 AM:
-

Unit test result for patch-v4:

{code}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
[INFO] Deleting /develop/github/hadoop/hadoop-tools/hadoop-aliyun/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun 
---
[INFO] Compiling 8 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-aliyun ---
[INFO] Compiling 16 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
[INFO] Surefire report directory: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.363 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.283 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.78 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.824 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.018 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.695 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 3.473 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.441 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.061 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.426 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.133 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyu

[jira] [Commented] (HADOOP-13584) Merge HADOOP-12756 branch to latest trunk

2016-09-22 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512522#comment-15512522
 ] 

Genmao Yu commented on HADOOP-13584:


Unit test result:

{code}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
[INFO] Deleting /develop/github/hadoop/hadoop-tools/hadoop-aliyun/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun 
---
[INFO] Compiling 8 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-aliyun ---
[INFO] Compiling 16 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
[INFO] Surefire report directory: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.363 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.283 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.78 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.824 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.018 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.695 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 3.473 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.441 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.061 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.426 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.133 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.

[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512424#comment-15512424
 ] 

Kai Zheng commented on HADOOP-13590:


Looking at the codes closely:
1. Maybe you also want to show {{now}} and {{renewalFailures}} values in the 
warning log?
{code}
+LOG.warn("Exception encountered while running the renewal"
++ " command for {}.", getUserName(), ie);
+final long now = Time.now();
+nextRefresh =
+getNextRetryTime(tgt, now, metrics.renewalFailures);
+metrics.renewalFailures++;
{code}

2. Could be renamed to something more specific: getNextTgtRenewalTime. And made 
static. If you pass {{tgtEndTime}} instead of the {{tgt}}, it would make the 
{{testGetNextRetryTime}} test much simpler. (A rough sketch combining 
suggestions 1-3 follows after this list.)
{code}
  long getNextRetryTime(final KerberosTicket tgt, final long currentTime,
  final long failureCount) {
LOG.debug("Tgt endtime is {}, failure count is {}.",
tgt.getEndTime().getTime(), failureCount);
final long lastRetryTime =
tgt.getEndTime().getTime() - kerberosMinSecondsBeforeRelogin;
return Math.min(lastRetryTime,
currentTime + kerberosMinSecondsBeforeRelogin * (1 << failureCount));
  }
{code}

3. A suggestion by the way, not introduced by this and not sure if it's good to 
do it here. Could we return earlier at the beginning so we can avoid at least 2 
level of indents and make the whole block more readable?
{code}
  /**Spawn a thread to do periodic renewals of kerberos credentials*/
  private void spawnAutoRenewalThreadForUserCreds() {
if (isSecurityEnabled()) {
  //spawn thread only if we have kerb credentials
  if (user.getAuthenticationMethod() == AuthenticationMethod.KERBEROS &&
  !isKeytab) {
...
...
 very deep nested ...
...
{code}

4. Just a question: could any exception other than {{IOException}} be thrown 
there?

5. In the new test class {{TestUGIWithMiniKdc}}: I'm not sure if we need 
{{testUGI}} to doAs the call 
{{UserGroupInformation.loginUserFromSubject(loginSubject)}}.
{code}
+  loginContext.login();
+  final Subject loginSubject = loginContext.getSubject();
+  final UserGroupInformation testUGI =
+  UserGroupInformation.createUserForTesting("testing", new String[0]);
+  testUGI.doAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws IOException {
+  UserGroupInformation.loginUserFromSubject(loginSubject);
+  return null;
+}
+  });
{code}
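
As a follow-up to points 1-3 above, a rough sketch of how they might fit together. 
The surrounding names ({{metrics}}, {{kerberosMinSecondsBeforeRelogin}}, {{user}}, 
{{isKeytab}}) are assumed from the existing UGI code; treat this as a shape, not 
the actual patch:

{code}
// Sketch only -- not the attached patch. Surrounding fields
// (kerberosMinSecondsBeforeRelogin, metrics, user, isKeytab) are assumed to
// exist as in the current UserGroupInformation.

// Point 2: static, takes the TGT end time instead of the whole ticket, which
// also keeps the unit test free of KerberosTicket handling.
static long getNextTgtRenewalTime(final long tgtEndTime, final long now,
    final long failureCount, final long minSecondsBeforeRelogin) {
  // Never retry later than the last safe point before the TGT expires.
  final long lastRetryTime = tgtEndTime - minSecondsBeforeRelogin;
  // Exponential backoff on consecutive failures.
  return Math.min(lastRetryTime,
      now + minSecondsBeforeRelogin * (1 << failureCount));
}

// Point 3: bail out early so the renewal-thread body is not nested two levels deep.
private void spawnAutoRenewalThreadForUserCreds() {
  if (!isSecurityEnabled()
      || user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
      || isKeytab) {
    return;  // no kerberos credentials to renew
  }
  // ... existing renewal-thread body, now at one indentation level ...
}

// Point 1: include the retry inputs in the warning so failures are easier to trace.
//   LOG.warn("Exception encountered while running the renewal command for {}."
//       + " now: {}, renewalFailures: {}.",
//       getUserName(), now, metrics.renewalFailures, ie);
{code}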

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovered no renewal will be done and client will eventually fail to 
> authenticate. We should retry with our best effort, until tgt expires, in the 
> hope that the error recovers before that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org