[jira] [Commented] (HADOOP-15197) Remove tomcat from the Hadoop-auth test bundle

2018-01-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348118#comment-16348118
 ] 

Xiao Chen commented on HADOOP-15197:


Thanks Kihwal for the review!

Attaching patch 2 to fix checkstyle. Also added a few dummy lines to trigger the 
KMS / HttpFS tests from precommit. Will commit by end of Thursday if all goes 
well.

> Remove tomcat from the Hadoop-auth test bundle
> --
>
> Key: HADOOP-15197
> URL: https://issues.apache.org/jira/browse/HADOOP-15197
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15197.01.patch, HADOOP-15197.02.patch
>
>
> We have switched KMS and HttpFS from tomcat to jetty in 3.0. There appear to 
> be some leftover tests in Hadoop-auth which were used for KMS / HttpFS 
> coverage.
> We should clean up these tests accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15197) Remove tomcat from the Hadoop-auth test bundle

2018-01-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15197:
---
Attachment: HADOOP-15197.02.patch

> Remove tomcat from the Hadoop-auth test bundle
> --
>
> Key: HADOOP-15197
> URL: https://issues.apache.org/jira/browse/HADOOP-15197
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15197.01.patch, HADOOP-15197.02.patch
>
>
> We have switched KMS and HttpFS from tomcat to jetty in 3.0. There appear to 
> be some leftover tests in Hadoop-auth which were used for KMS / HttpFS 
> coverage.
> We should clean up these tests accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348050#comment-16348050
 ] 

Xiaoyu Yao edited comment on HADOOP-15204 at 2/1/18 5:54 AM:
-

[~anu], you mean HADOOP-8608?


was (Author: xyao):
[~anu] , you mean -HADOOP-860+8+-?

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348050#comment-16348050
 ] 

Xiaoyu Yao edited comment on HADOOP-15204 at 2/1/18 5:53 AM:
-

[~anu], you mean -HADOOP-860+8+-?


was (Author: xyao):
[~anu] , you mean HADOOP-8606?

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348050#comment-16348050
 ] 

Xiaoyu Yao commented on HADOOP-15204:
-

[~anu], you mean HADOOP-8606?

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347954#comment-16347954
 ] 

genericqa commented on HADOOP-15204:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 10 new + 241 unchanged - 0 fixed = 251 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908696/HADOOP-15204.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a8d53d326091 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bee384 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/testReport/ |
| Max. process+thread count | 1500 (vs. ulimit of 5000) |

[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347890#comment-16347890
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~chris.douglas] As the author of HDFS-8608, I would appreciate any 
perspectives you have on this JIRA. 

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347887#comment-16347887
 ] 

Steve Loughran commented on HADOOP-14556:
-

[~daryn]

any chance I could see your patch before I start trying to get mine to work 
properly?

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request a short-lived session secret & ID; 
> these will be saved in the token and marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
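
As a point of reference, this is roughly what requesting such a token through the 
existing FileSystem API would look like once S3A implements it; a minimal sketch, 
assuming a hypothetical s3a://my-bucket/ filesystem and "yarn" as the renewer:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

// Sketch: FileSystem.getDelegationToken() is the existing public API; what
// S3A would place inside the token (short-lived STS session credentials) is
// what this JIRA proposes. Bucket name and renewer are illustrative only.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(new URI("s3a://my-bucket/"), conf);

Token<?> token = fs.getDelegationToken("yarn");
if (token != null) {
  System.out.println("Token kind: " + token.getKind()
      + ", service: " + token.getService());
}
{code}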



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347884#comment-16347884
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~arpitagarwal], [~xyao], [~nandakumar131], [~elek], [~msingh], [~jnp] Please 
take a look when you get a chance. 

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-15204:
--
Attachment: HADOOP-15204.001.patch

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-15204:
--
Status: Patch Available  (was: Open)

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB etc. This JIRA is inspired by 
> HDFS-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)
Anu Engineer created HADOOP-15204:
-

 Summary: Add Configuration API for parsing storage sizes
 Key: HADOOP-15204
 URL: https://issues.apache.org/jira/browse/HADOOP-15204
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.1.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 3.1.0


Hadoop has a lot of configurations that specify memory and disk size. This JIRA 
proposes to add an API like {{Configuration.getStorageSize}} which will allow 
users to specify units like KB, MB, GB etc. This JIRA is inspired by HDFS-8608 
and Ozone. Adding {{getTimeDuration}} support was a great improvement for the 
Ozone code base; this JIRA hopes to do the same for configs that deal with disk 
and memory usage.
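
As a rough illustration of the proposed API (the {{StorageUnit}} parameter and the 
exact method signature below are assumptions based on this description, not a 
committed interface), usage could look like:

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch of the proposed Configuration.getStorageSize: the value carries its
// own unit, and the caller asks for it converted to a target unit, with a
// default applied when the key is absent (mirroring getTimeDuration).
// StorageUnit and the parameter order are assumptions, not the final API.
Configuration conf = new Configuration();
conf.set("dfs.blocksize", "128 MB");

double blockSizeInMb =
    conf.getStorageSize("dfs.blocksize", "64 MB", StorageUnit.MB);
double blockSizeInBytes =
    conf.getStorageSize("dfs.blocksize", "64 MB", StorageUnit.BYTES);
{code}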



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347857#comment-16347857
 ] 

genericqa commented on HADOOP-15168:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 7s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15168 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908687/HADOOP-15168.04.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 046ba434cbea 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a725bb |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14053/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14053/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347827#comment-16347827
 ] 

Bharat Viswanadham commented on HADOOP-15168:
-

Attached patch v04 to add the kdiag command to the hadoop command only.

Also updated SecureMode.md to reflect the same.

cc [~ste...@apache.org]

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15168:

Attachment: HADOOP-15168.04.patch

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15007) Stabilize and document Configuration element

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15007:
---

Assignee: Ajay Kumar  (was: Anu Engineer)

> Stabilize and document Configuration  element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
>  value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347623#comment-16347623
 ] 

Ajay Kumar commented on HADOOP-15202:
-

Original suggestion from [~xyao]: 
[HDFS-13060|https://issues.apache.org/jira/browse/HDFS-13060?focusedCommentId=16347397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347397]

>  Deprecate CombinedIPWhiteList to use CombinedIPList 
> -
>
> Key: HADOOP-15202
> URL: https://issues.apache.org/jira/browse/HADOOP-15202
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
>  Deprecate CombinedIPWhiteList to use CombinedIPList. 
> Original suggestion from [~xyao]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15202:

Description: 
 Deprecate CombinedIPWhiteList to use CombinedIPList. 
Original suggestion from [~xyao]

  was: Deprecate CombinedIPWhiteList to use CombinedIPList.


>  Deprecate CombinedIPWhiteList to use CombinedIPList 
> -
>
> Key: HADOOP-15202
> URL: https://issues.apache.org/jira/browse/HADOOP-15202
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
>  Deprecate CombinedIPWhiteList to use CombinedIPList. 
> Original suggestion from [~xyao]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14353) add an SSE-KMS scale test to see if you can overload the keystore in random IO

2018-01-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14353.
-
Resolution: Won't Fix

Not seen this happening in the field; doubt you can do it in a scale test. And 
if you could, it'd disrupt all other users of KMS in the same account.

> add an SSE-KMS scale test to see if you can overload the keystore in random IO
> --
>
> Key: HADOOP-14353
> URL: https://issues.apache.org/jira/browse/HADOOP-14353
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Maybe add an optional IT test to aggressively seek on an SSE-KMS test file to 
> see if it can overload the KMS infra. The [default 
> limit|http://docs.aws.amazon.com/kms/latest/developerguide/limits.html] is 
> 600 requests/second. This may seem like a lot, but with random IO, every new 
> HTTPS request in the chain potentially triggers a new operation. 
> Someone should see what happens: how easy it is to trigger, and what the 
> error message is.
> This may not be something we can trigger in a simple IT test, just because 
> it's a single host.
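
For what it's worth, the inner loop of such a test might look like the sketch 
below; the bucket, file and iteration count are made up, and 
fs.s3a.experimental.input.fadvise=random is the existing random-IO policy:

{code}
import java.util.Random;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: issue many random positioned reads against an SSE-KMS
// encrypted object; with the random IO policy each read may trigger a new
// HTTPS request and hence, potentially, another KMS call.
Configuration conf = new Configuration();
conf.set("fs.s3a.experimental.input.fadvise", "random");
Path path = new Path("s3a://my-sse-kms-bucket/scale/testfile");
FileSystem fs = path.getFileSystem(conf);

long len = fs.getFileStatus(path).getLen();
byte[] buffer = new byte[1024];
Random rnd = new Random();
try (FSDataInputStream in = fs.open(path)) {
  for (int i = 0; i < 10000; i++) {
    long pos = (long) (rnd.nextDouble() * (len - buffer.length));
    in.readFully(pos, buffer);   // positioned read at a random offset
  }
}
{code}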



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14620) S3A authentication failure for regions other than us-east-1

2018-01-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14620:

Affects Version/s: (was: 2.8.0)

> S3A authentication failure for regions other than us-east-1
> ---
>
> Key: HADOOP-14620
> URL: https://issues.apache.org/jira/browse/HADOOP-14620
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Ilya Fourmanov
>Priority: Minor
> Attachments: s3-403.txt
>
>
> hadoop fs s3a:// operations fail authentication for S3 buckets hosted in 
> regions other than the default us-east-1.
> Steps to reproduce:
> # create an S3 bucket in eu-west-1
> # Using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run the 
> following command:
> {code}
> hadoop --loglevel DEBUG fs -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com -ls s3a://your-eu-west-1-hosted-bucket/
> {code}
> Expected behaviour:
> You will see a listing of the bucket.
> Actual behaviour:
> You will get a 403 Authentication Denied response from AWS S3.
> The reason is a mismatch between the string to sign as defined in 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html 
> as provided by Hadoop and as expected by AWS. 
> If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes 
> returned by AWS, you will see that AWS expects CanonicalizedResource to be in 
> the form 
> /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/.
> Hadoop provides it as /your-eu-west-1-hosted-bucket/.
> Note that the AWS documentation doesn't explicitly state that the endpoint or 
> full DNS address should be appended to CanonicalizedResource; however, practice 
> shows it is actually required.
> I've also submitted this to AWS for them to correct the behaviour or the 
> documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar moved HDFS-13090 to HADOOP-15203:


Key: HADOOP-15203  (was: HDFS-13090)
Project: Hadoop Common  (was: Hadoop HDFS)

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
>
> Support a composite trusted channel resolver that supports both a whitelist 
> and a blacklist.
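
A minimal sketch of the shape such a composite resolver could take; the names 
below are hypothetical, not the classes the eventual patch will add:

{code}
import java.net.InetAddress;

// Hypothetical sketch: a channel is trusted only if the peer is on the
// whitelist and not on the blacklist. IpListCheck stands in for whatever
// IP-list abstraction the real patch uses.
interface IpListCheck {
  boolean isIn(String ipAddress);
}

class CompositeTrustedChannelResolver {
  private final IpListCheck whitelist;
  private final IpListCheck blacklist;

  CompositeTrustedChannelResolver(IpListCheck whitelist, IpListCheck blacklist) {
    this.whitelist = whitelist;
    this.blacklist = blacklist;
  }

  /** Trusted = whitelisted and not blacklisted. */
  boolean isTrusted(InetAddress peer) {
    String ip = peer.getHostAddress();
    return whitelist.isIn(ip) && !blacklist.isIn(ip);
  }
}
{code}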



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14620) S3A authentication failure for regions other than us-east-1

2018-01-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14620:

Priority: Minor  (was: Major)

> S3A authentication failure for regions other than us-east-1
> ---
>
> Key: HADOOP-14620
> URL: https://issues.apache.org/jira/browse/HADOOP-14620
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ilya Fourmanov
>Priority: Minor
> Attachments: s3-403.txt
>
>
> hadoop fs s3a:// operations fail authentication for S3 buckets hosted in 
> regions other than the default us-east-1.
> Steps to reproduce:
> # create an S3 bucket in eu-west-1
> # Using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run the 
> following command:
> {code}
> hadoop --loglevel DEBUG fs -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com -ls s3a://your-eu-west-1-hosted-bucket/
> {code}
> Expected behaviour:
> You will see a listing of the bucket.
> Actual behaviour:
> You will get a 403 Authentication Denied response from AWS S3.
> The reason is a mismatch between the string to sign as defined in 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html 
> as provided by Hadoop and as expected by AWS. 
> If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes 
> returned by AWS, you will see that AWS expects CanonicalizedResource to be in 
> the form 
> /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/.
> Hadoop provides it as /your-eu-west-1-hosted-bucket/.
> Note that the AWS documentation doesn't explicitly state that the endpoint or 
> full DNS address should be appended to CanonicalizedResource; however, practice 
> shows it is actually required.
> I've also submitted this to AWS for them to correct the behaviour or the 
> documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15202:
---

 Summary:  Deprecate CombinedIPWhiteList to use CombinedIPList 
 Key: HADOOP-15202
 URL: https://issues.apache.org/jira/browse/HADOOP-15202
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Ajay Kumar
Assignee: Ajay Kumar


 Deprecate CombinedIPWhiteList to use CombinedIPList.
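
One possible shape of the deprecation, assuming CombinedIPList ends up exposing 
the same file-based constructor as today's CombinedIPWhiteList (that constructor 
shape is an assumption, not the final API):

{code}
// Hypothetical sketch: keep CombinedIPWhiteList as a thin, deprecated alias
// over the more general CombinedIPList, so existing callers keep compiling.
/**
 * @deprecated use {@link CombinedIPList} instead.
 */
@Deprecated
public class CombinedIPWhiteList extends CombinedIPList {
  public CombinedIPWhiteList(String fixedWhiteListFile,
      String variableWhiteListFile, long cacheExpiryInSeconds) {
    super(fixedWhiteListFile, variableWhiteListFile, cacheExpiryInSeconds);
  }
}
{code}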



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347606#comment-16347606
 ] 

genericqa commented on HADOOP-15168:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
55s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15168 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908636/HADOOP-15168.03.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux bb697dfb13a1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ce2190 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14052/testReport/ |
| Max. process+thread count | 318 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14052/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>





[jira] [Created] (HADOOP-15201) Automatically determine region & hence S3 endpoint of buckets

2018-01-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15201:
---

 Summary: Automatically determine region & hence S3 endpoint of 
buckets
 Key: HADOOP-15201
 URL: https://issues.apache.org/jira/browse/HADOOP-15201
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: Steve Loughran


Use {{getBucketLocation}} to determine the bucket's location & map it to an 
endpoint, if fs.s3a.endpoint is set to "automatic".

S3Guard added the API call {{String getBucketLocation()}}, which is used for 
DDB binding. We can also use this to determine the S3 endpoint, to avoid 
recurrent issues with auth failures related to it not being valid.

Still need to handle: buckets on third-party servers, and the inevitability 
that new AWS regions will be added after the Hadoop version & AWS JAR is 
shipped and frozen.
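
As a rough sketch of the mapping this implies (the "automatic" sentinel and this 
helper are assumptions drawn from the description above, not existing code):

{code}
// Sketch: map a bucket location returned by getBucketLocation() to an S3
// endpoint. The classic region may come back as an empty string or "US";
// third-party stores and regions newer than the bundled SDK still need an
// explicit fs.s3a.endpoint.
static String endpointForRegion(String bucketLocation) {
  if (bucketLocation == null || bucketLocation.isEmpty()
      || "US".equals(bucketLocation)) {
    return "s3.amazonaws.com";
  }
  return "s3." + bucketLocation + ".amazonaws.com";
}
{code}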
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347595#comment-16347595
 ] 

genericqa commented on HADOOP-15168:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
59s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
3s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15168 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908636/HADOOP-15168.03.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 16d8d0c0c011 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ce2190 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14051/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14051/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>





[jira] [Issue Comment Deleted] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2018-01-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14831:

Comment: was deleted

(was: Use {{getBucketLocation}} to determine location & map to endpoint, if 
fs.s3a.endpoint is unset (or maybe, "automatic"?)

For S3Guard, we added the API call {{String getBucketLocation()}}, which is 
used for DDB binding. We can also use this to determine the S3 endpoint, to 
avoid recurrent issues with auth failures related to it not being set up.)

> Über-jira: S3a phase IV: Hadoop 3.1 features
> 
>
> Key: HADOOP-14831
> URL: https://issues.apache.org/jira/browse/HADOOP-14831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> All the S3/S3A features targeting Hadoop 3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15007) Stabilize and document Configuration element

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347587#comment-16347587
 ] 

Ajay Kumar edited comment on HADOOP-15007 at 1/31/18 8:55 PM:
--

[~ste...@apache.org], since excessive logging is the issue, will it be ok if we 
log a single line at debug level (something like "Invalid tag 'secret' found 
for property:test.fs.s3a.name Source") without a stacktrace? This debug line is 
applicable even if we change tags to String instead of an enum.
Below are responses to some of the other questions raised:
{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (XML config with no tags as well). Ready 
to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed.
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the de facto documentation 
of the XML format: they need to be updated with the change.{quote}
Already added javadocs for the changes in this functionality. Will add/update 
javadocs if anything was missed.
{quote}what's going to happen when existing code which serializes/deserializes 
configs using Hadoop writables encounters configs with tags? Can an old 
hadoop-common lib deserialize, say, core-default.xml with tags added? I don't 
see any tests for that, and I'm assuming "no" unless it can be 
demonstrated.{quote}
Will add a test for this.
{quote}Can I add tags to a property retrieved with getPassword()?{quote}
Only if getPassword falls back to config. The current change is not applicable 
to {{CredentialProviderFactory}}, so 
{{HADOOP_SECURITY_CREDENTIAL_PROVIDER_PATH}} can be tagged but not the 
properties retrieved from it.
{quote}If I override a -default property in a file, does it inherit the tags 
from the parent?{quote}
Yes.
{quote}If I add tags to an overridden property, do the tags override or replace 
existing ones?{quote}
A new tag will not replace the old one but will be added to it. This will 
result in the same property being returned for two different tags by 
{{getAllPropertiesByTag}}, e.g. SECURITY and CLIENT.
{quote}Configuration.readTagFromConfig. If there's an invalid tag in a -default 
file, is it going to fill the log files with info messages on every load of 
every single configuration file? If so, it's too noisy. One warning per 
(file/property) the way we do for deprecated tags.{quote}
Will fix this.
{quote}Configuration.getPropertyTag returns a PropertyTag enum, which is marked 
as Private/Evolving. But Configuration is Public/Stable. Either we need to mark 
PropertyTag as Public/{Evolving/Unstable}, or getPropertyTag is marked as 
Private.{quote}
Open to both suggestions on this one.
{quote}Could there have been a way to allow PropertyTag enums to be registered 
dynamically/via classname strings, so that they can be kept in the specific 
modules? We've now tainted hadoop-common with details about yarn, hdfs, 
ozone.{quote}
I see your point, but how will we register a class which is not on the 
classpath of common? Another option is to create a common class for property 
tags and use it for everything, i.e. hdfs, yarn, etc.



was (Author: ajayydv):
[~ste...@apache.org], Since excessive logging is the issue, will it be ok if we 
log a single line at debug level.(something like " Invalid tag 'secret' found 
for property:test.fs.s3a.name Source")  without stacktrace? This debug line is 
i think will be applicable even if change tags to String instead if enum.

{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (xml config with no tags as well.). Ready 
to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed. 
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the defacto documentation of 
the XML format: they need to be updated with the change.{quote}
Already added javadocs for changes in this functionality. Will add/update 
javadocs if missed. 
{quote}what's going to to happen when existing code which 
serializes/deserializes configs using Hadoop writables encounters configs with 
tags? Can an old hadoop-common lib deserialize, say core-default.xml with tags 
added? I don't see any tests for that, and

[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration element

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347587#comment-16347587
 ] 

Ajay Kumar commented on HADOOP-15007:
-

[~ste...@apache.org], Since excessive logging is the issue, will it be ok if we 
log a single line at debug level.(something like " Invalid tag 'secret' found 
for property:test.fs.s3a.name Source")  without stacktrace? This debug line is 
i think will be applicable even if change tags to String instead if enum.

{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (xml config with no tags as well.). Ready 
to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed. 
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the defacto documentation of 
the XML format: they need to be updated with the change.{quote}
Already added javadocs for changes in this functionality. Will add/update 
javadocs if missed. 
{quote}what's going to to happen when existing code which 
serializes/deserializes configs using Hadoop writables encounters configs with 
tags? Can an old hadoop-common lib deserialize, say core-default.xml with tags 
added? I don't see any tests for that, and I'm assuming "no" unless it can be 
demonstrated.{quote}
Will add test for this.
{quote}Can I add tags to a property retrieved with getPassword()?{quote}
Only if If getPassword falls back to config. Current change is not applocable 
to {{CredentialProviderFactory}}. So 
{{HADOOP_SECURITY_CREDENTIAL_PROVIDER_PATH}} can be tagged but not the 
properties retrieved from it.
{quote}If I overrride a -default property in a file, does it inherit the tags 
from the parent?{quote}
Yes
{quote}If I add tags to an overidden property, do the tags override or replace 
existing ones?{quote}
New tag will not replace the old one but will be an addition to it. It will 
result in same property being returned for two different tags by 
{{getAllPropertiesByTag}}. Ex SECURITY, CLIENT
{quote}Configuration.readTagFromConfig. If there's an invalid tag in a -default 
file, is it going to fill the log files with info messages on every load of 
every single configuration file? If so, it's too noisy. One warning per 
(file/property) the way we do for deprecated tags.{quote}
Will fix this.
{quote}Configuration.getPropertyTag returns a PropertyTag enum, which is marked 
as Private/Evolving. But Configuration is Public/Stable. Either we need to mark 
PropertyTag as Public/{Evolving/Unstable} or getPropertyTag is marked as 
Private.{quote}
Open to both suggestions on this one.
{quote}Could there have been a way to allow PropertyTag enums to be registered 
dynamically/via classname strings, so that they can be kept in the specific 
modules. We've now tainted hadoop-common with details about yarn, hdfs, 
ozone.{quote}
I see your point, but how will we register a class which is not in the 
classpath of common? Another option is to create a common class for property 
tags and use it for everything, i.e. hdfs, yarn, etc.
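
For illustration, a minimal sketch of that behaviour. The XML layout for tags 
and the exact {{getAllPropertiesByTag}} signature / package of {{PropertyTag}} 
are assumptions based on this thread, not the final API:

{code:java}
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
// PropertyTag is the enum under discussion; its package and this method's
// exact signature are assumed here for the sake of the sketch.

public class TagSketch {
  public static void main(String[] args) {
    // Hypothetical core-site.xml entry:
    //   <property>
    //     <name>hadoop.security.authentication</name>
    //     <value>kerberos</value>
    //     <tag>SECURITY, CLIENT</tag>
    //   </property>
    Configuration conf = new Configuration();  // loads *-default.xml and *-site.xml
    Properties security = conf.getAllPropertiesByTag(PropertyTag.SECURITY);
    Properties client = conf.getAllPropertiesByTag(PropertyTag.CLIENT);
    // The property above shows up in both results: a tag added by an override
    // is appended to the existing tags rather than replacing them.
    System.out.println(security.size() + " / " + client.size());
  }
}
{code}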


> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-01-31 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15200:

Target Version/s: 3.1.0, 3.0.1

> Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0
> --
>
> Key: HADOOP-15200
> URL: https://issues.apache.org/jira/browse/HADOOP-15200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Priority: Critical
>
> Post HADOOP-14267, the constructor for DistCpOptions was removed and will 
> break any project using it for java based implementation/usage of DistCp. 
> This JIRA would track next steps required to reconcile/fix this 
> incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14267) Make DistCpOptions class immutable

2018-01-31 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347542#comment-16347542
 ] 

Kuhu Shukla commented on HADOOP-14267:
--

Linking a new JIRA to track further discussion.

> Make DistCpOptions class immutable
> --
>
> Key: HADOOP-14267
> URL: https://issues.apache.org/jira/browse/HADOOP-14267
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> HDFS-10533.004.patch, HDFS-10533.005.patch, HDFS-10533.006.patch, 
> HDFS-10533.007.patch, HDFS-10533.008.patch, HDFS-10533.009.patch, 
> HDFS-10533.010.patch, HDFS-10533.011.patch, HDFS-10533.012.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from command-line (via the {{OptionsParser}}) or may be set 
> manually (eg construct an instance and call setters). As there are multiple 
> option fields and more (e.g. [HDFS-9868], [HDFS-10314]) to add, validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simple and easier to use and share, plus it scales well
> # validation is automatic, e.g. manually constructed {{DistCpOptions}} gets 
> validated before usage
> # validation error message is well-defined which does not depend on the order 
> of setters
> This jira is to track the effort of making the {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-01-31 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created HADOOP-15200:


 Summary: Missing DistCpOptions constructor breaks downstream 
DistCp projects in 3.0
 Key: HADOOP-15200
 URL: https://issues.apache.org/jira/browse/HADOOP-15200
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 3.0.0
Reporter: Kuhu Shukla


Post HADOOP-14267, the constructor for DistCpOptions was removed and will break 
any project using it for java based implementation/usage of DistCp. This JIRA 
would track next steps required to reconcile/fix this incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347532#comment-16347532
 ] 

Steve Loughran commented on HADOOP-14831:
-

Use {{getBucketLocation}} to determine the location & map it to an endpoint, if 
fs.s3a.endpoint is unset (or maybe set to "automatic"?).

For S3Guard, we added the API call {{String getBucketLocation()}}, which is 
used for the DDB binding. We can also use this to determine the S3 endpoint, to 
avoid recurrent auth failures related to it not being set up.
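
A rough sketch of the idea using the SDK client directly; the endpoint naming 
below is an assumption about how the mapping could look, not S3A's actual 
resolution logic:

{code:java}
import com.amazonaws.services.s3.AmazonS3;

public class EndpointSketch {
  // Illustrative only: derive an endpoint from the bucket's region when
  // fs.s3a.endpoint is unset. getBucketLocation returns "US" for US Standard.
  static String endpointFor(AmazonS3 s3, String bucket) {
    String region = s3.getBucketLocation(bucket);   // e.g. "eu-west-1"
    if ("US".equals(region)) {
      return "s3.amazonaws.com";
    }
    return "s3." + region + ".amazonaws.com";
  }
}
{code}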

> Über-jira: S3a phase IV: Hadoop 3.1 features
> 
>
> Key: HADOOP-14831
> URL: https://issues.apache.org/jira/browse/HADOOP-14831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> All the S3/S3A features targeting Hadoop 3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347532#comment-16347532
 ] 

Steve Loughran edited comment on HADOOP-14831 at 1/31/18 8:10 PM:
--

Use {{getBucketLocation}} to determine the location & map it to an endpoint, if 
fs.s3a.endpoint is unset (or maybe set to "automatic"?).

For S3Guard, we added the API call {{String getBucketLocation()}}, which is 
used for the DDB binding. We can also use this to determine the S3 endpoint, to 
avoid recurrent auth failures related to it not being set up.


was (Author: ste...@apache.org):
Use{{getBucketLocation}} to determine location & map to endpoint, if 
fs.s3a.endpoint is unset (or maybe, "automatic"?)

for S3guard, we added the API call {{String getBucketLocation()}}, which is 
used for DDB binding, We can also use this to determine the s3 endpoint, to 
avoid recurrent issues with auth failures related to it not being setup

> Über-jira: S3a phase IV: Hadoop 3.1 features
> 
>
> Key: HADOOP-14831
> URL: https://issues.apache.org/jira/browse/HADOOP-14831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> All the S3/S3A features targeting Hadoop 3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-15168:

Comment: was deleted

(was: Uploaded patch v00 as v04. Will commit it shortly. )

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-15168:

Attachment: (was: HADOOP-15168.04.patch)

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15199) hadoop_verify_confdir prevents previously valid log4j config file names

2018-01-31 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347512#comment-16347512
 ] 

Sean Mackrory commented on HADOOP-15199:


CC'ing [~aw], in case you have any input.

> hadoop_verify_confdir prevents previously valid log4j config file names
> ---
>
> Key: HADOOP-15199
> URL: https://issues.apache.org/jira/browse/HADOOP-15199
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Priority: Major
>
> When starting a daemon, the shell scripts check that there's a 
> log4j.properties file and logs an error if there isn't one. But there appear 
> to be several instances of files named with a prefix or suffix (for example - 
> I found this starting up HttpFS with httpfs-log4j.properties in a 
> Bigtop-style deployment). We should probably loosen the check a little, 
> something like this
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
> b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
> index 2dc1dc8..df82bd2 100755
> --- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
> +++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
> @@ -651,7 +651,7 @@ function hadoop_verify_confdir
>  {
>    # Check only log4j.properties by default.
>    # --loglevel does not work without logger settings in 
> log4j.log4j.properties.
> -  if [[ ! -f "${HADOOP_CONF_DIR}/log4j.properties" ]]; then
> +  if [[ ! -f "${HADOOP_CONF_DIR}/*log4j*.properties" ]]; then
>  hadoop_error "WARNING: log4j.properties is not found. HADOOP_CONF_DIR 
> may be incomplete."
>    fi
>  }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15199) hadoop_verify_confdir prevents previously valid log4j config file names

2018-01-31 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15199:
--

 Summary: hadoop_verify_confdir prevents previously valid log4j 
config file names
 Key: HADOOP-15199
 URL: https://issues.apache.org/jira/browse/HADOOP-15199
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory


When starting a daemon, the shell scripts check that there's a log4j.properties 
file and logs an error if there isn't one. But there appear to be several 
instances of files named with a prefix or suffix (for example - I found this 
starting up HttpFS with httpfs-log4j.properties in a Bigtop-style deployment). 
We should probably loosen the check a little, something like this
{code:java}
diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index 2dc1dc8..df82bd2 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -651,7 +651,7 @@ function hadoop_verify_confdir
 {
   # Check only log4j.properties by default.
   # --loglevel does not work without logger settings in log4j.log4j.properties.
-  if [[ ! -f "${HADOOP_CONF_DIR}/log4j.properties" ]]; then
+  if [[ ! -f "${HADOOP_CONF_DIR}/*log4j*.properties" ]]; then
 hadoop_error "WARNING: log4j.properties is not found. HADOOP_CONF_DIR may 
be incomplete."
   fi
 }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347498#comment-16347498
 ] 

Hanisha Koneru commented on HADOOP-15168:
-

Uploaded patch v00 as v04. Will commit it shortly. 

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-15168:

Attachment: HADOOP-15168.04.patch

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347494#comment-16347494
 ] 

Hanisha Koneru commented on HADOOP-15168:
-

Thanks [~bharatviswa]. 
I had an offline discussion with [~arpitagarwal]. We do not need to add kdiag 
to hdfs and yarn. It is sufficient to add it to hadoop cli.

Patch v00 is good. I am sorry about the extra revisions, Bharat. 

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347430#comment-16347430
 ] 

Bharat Viswanadham commented on HADOOP-15168:
-

Fixed checkstyle issues and shell check issues in patch v03.

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command

2018-01-31 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15168:

Attachment: HADOOP-15168.03.patch

> Add kdiag tool to hadoop command
> 
>
> Key: HADOOP-15168
> URL: https://issues.apache.org/jira/browse/HADOOP-15168
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, 
> HADOOP-15168.02.patch, HADOOP-15168.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347343#comment-16347343
 ] 

Steve Loughran commented on HADOOP-15124:
-

Not forgotten about this, it's on my mental list of "reviews I need to sit down 
and catch up with"

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, statistics
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that the FileSystem.Statistics code paths' wall 
> time was 5.58% and CPU time 26.5% of the total execution time.
> After switching FileSystem.Statistics implementation to LongAdder, consumed 
> Wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average results, but regardless of performance gains switching to 
> LongAdder simplifies code and reduces its complexity.
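
For context, a minimal sketch of the kind of change being proposed: replacing a 
shared, locked counter with {{java.util.concurrent.atomic.LongAdder}}. The 
class and method names below are illustrative, not the actual 
FileSystem.Statistics code:

{code:java}
import java.util.concurrent.atomic.LongAdder;

// Illustrative counter only, not the actual FileSystem.Statistics implementation.
class StatisticsSketch {
  private final LongAdder bytesRead = new LongAdder();

  void incrementBytesRead(long newBytes) {
    bytesRead.add(newBytes);   // striped per-thread cells, no shared lock to contend on
  }

  long getBytesRead() {
    return bytesRead.sum();    // sums the cells; adequate for periodic reporting
  }
}
{code}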



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347305#comment-16347305
 ] 

Anu Engineer commented on HADOOP-15007:
---

In my mind, the enums act as a set of constants that is easy to use in code; 
that is, once the const string is read, it becomes something better than just a 
string, since it is a kind of type. 

But I see the issue that the parser is logging an error, and that *should not* 
happen. I am fine with removing the enums if that is the only way, but if we 
are able to address the logging issue without that, I would prefer it.

cc: [~ajayydv]  

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347284#comment-16347284
 ] 

Steve Loughran commented on HADOOP-15007:
-

The problem I'm having with enums is the stack trace. If a property config is 
loaded in Hadoop 3.1 which contains a ref to a tag added in Hadoop 3.2, you'll 
get the stack trace above.

You aren't getting type safety, because the references in XML are just strings. 
What you can do in code is have a list of predefined const strings, so at least 
in our code & tests it can be referred to without typo risk.

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14918) remove the Local Dynamo DB test option

2018-01-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347263#comment-16347263
 ] 

Steve Loughran commented on HADOOP-14918:
-

OK, that's turning out to be a major bit of work; we need a test against the 
real store which doesn't damage the data in any DDB table you are actually 
using for S3Guard. I'm going to propose that you need to declare/configure the 
name of a test table for this, one which the test will try to create on demand 
and destroy afterwards.

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local Dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS SDK
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with S3Guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347219#comment-16347219
 ] 

Anu Engineer commented on HADOOP-15007:
---

[~elek], I personally favor type safety over duck typing. That said, we should 
be able to address [~ste...@apache.org]'s concerns without breaking type safety. 
Can you please elucidate why you think the enums should be replaced? I am 
trying to understand the trade-offs.

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14267) Make DistCpOptions class immutable

2018-01-31 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347210#comment-16347210
 ] 

Kuhu Shukla commented on HADOOP-14267:
--

[~liuml07],
bq. For the existing problems for downstream in using the DistCpOptions as Jing 
proposed, I'll create another jira for tracking this. That should go to 
branch-2 and be backwards-compatible.
Do we have any JIRAs or road map to track this since having a somewhat 
compatible change will save downstream projects (which can be many more than 
Oozie or Falcon) from breaking? Appreciate any and all comments on this.

> Make DistCpOptions class immutable
> --
>
> Key: HADOOP-14267
> URL: https://issues.apache.org/jira/browse/HADOOP-14267
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> HDFS-10533.004.patch, HDFS-10533.005.patch, HDFS-10533.006.patch, 
> HDFS-10533.007.patch, HDFS-10533.008.patch, HDFS-10533.009.patch, 
> HDFS-10533.010.patch, HDFS-10533.011.patch, HDFS-10533.012.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from command-line (via the {{OptionsParser}}) or may be set 
> manually (eg construct an instance and call setters). As there are multiple 
> option fields and more (e.g. [HDFS-9868], [HDFS-10314]) to add, validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simple and easier to use and share, plus it scales well
> # validation is automatic, e.g. manually constructed {{DistCpOptions}} gets 
> validated before usage
> # validation error message is well-defined which does not depend on the order 
> of setters
> This jira is to track the effort of making the {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347207#comment-16347207
 ] 

Jason Lowe commented on HADOOP-15170:
-

Thanks for updating the patch!

I tested this out on a manually created tarball with some symlinks, and the 
link targets are being mishandled. For example:
{noformat}
$ mkdir testdir
$ cd testdir
$ ln -s a b
$ ln -s /tmp/foo c
$ ls -l
total 0
lrwxrwxrwx. 1 nobody nobody 1 Jan 31 10:40 b -> a
lrwxrwxrwx. 1 nobody nobody 8 Jan 31 10:40 c -> /tmp/foo
$ cd ..
$ tar zcf testdir.tgz testdir 
{noformat}
When I unpack this tarball to a destination directory of "output" with 
unTarUsingJava, the symlinks are all made relative to the top-level output 
directory, which is incorrect:
{noformat}
$ ls -l output/testdir
total 0
lrwxrwxrwx. 1 nobody nobody  8 Jan 31 10:41 b -> output/a
lrwxrwxrwx. 1 nobody nobody 14 Jan 31 10:41 c -> output/tmp/foo
{noformat}

The fix is to just take the symlink name as-is rather than trying to make it 
relative to something else.
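
For illustration, a sketch of that handling in a Java untar loop built on 
commons-compress; the surrounding loop and helper names are assumptions, not 
the actual FileUtil code:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;

class UnTarSymlinkSketch {
  // Illustrative only: materialize a symlink entry. The key point is that
  // entry.getLinkName() is used verbatim (relative or absolute) instead of
  // being re-based against the output directory.
  static void createSymlink(File outputDir, TarArchiveEntry entry) throws IOException {
    File link = new File(outputDir, entry.getName());
    link.getParentFile().mkdirs();
    Files.createSymbolicLink(link.toPath(), Paths.get(entry.getLinkName()));
  }
}
{code}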

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15016) Cost-Based RPC FairCallQueue with Reservation support

2018-01-31 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347132#comment-16347132
 ] 

Wei Yan commented on HADOOP-15016:
--

Sorry, [~xyao], I missed your previous comment...
{quote}bq.  1. This can be a useful feature for multi-tenancy Hadoop cluster. 
The cost estimates for different RPC calls can be difficult. Instead of 
hardcode fixed value per RPC, I would suggest making it a pluggable interface 
so that we can customize it for different deployments.
{quote}
Agree. This cost calculation will be pluggable.
{quote}bq. 2. The reserved share of call queue looks good. It is similar what 
we proposed in HADOOP-13128. What do we plan to handle the case when the 
reserved queue is full? blocking or backoff?
{quote}
Currently I'm thinking about backoff, the same behavior as how the existing 
queues handle being full.
{quote}bq. 3. The feature might need many manual configurations and tune to 
work for specific deployment and workloads. Do you want to add a section to 
discuss configurations, CLI tools, etc. to make this easier to use?
{quote}
Yes. I'm looking for a mathematical model to calculate the cost of different 
RPC calls based on historical access patterns. This could serve as a suggestion 
for users. We may also need to build a simulation tool to replay the historical 
RPC log and verify different configurations.
{quote}bq. 4. It would be great if you could share some of the results achieved 
with the POC patch (e.g., RPC/second, average locking, process and queue time 
with/wo the patch).
{quote}
I have been busy with some other projects; will post some results next month.

> Cost-Based RPC FairCallQueue with Reservation support
> -
>
> Key: HADOOP-15016
> URL: https://issues.apache.org/jira/browse/HADOOP-15016
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Major
> Attachments: Adding reservation support to NameNode RPC resource.pdf, 
> Adding reservation support to NameNode RPC resource_v2.pdf, 
> HADOOP-15016_poc.patch
>
>
> FairCallQueue is introduced to provide RPC resource fairness among different 
> users. In current implementation, each user is weighted equally, and the 
> processing priority for different RPC calls are based on how many requests 
> that user sent before. This works well when the cluster is shared among 
> several end-users.
> However, this has some limitations when a cluster is shared among both 
> end-users and service jobs, like ETL jobs which run under a service 
> account and need to issue lots of RPC calls. When the NameNode becomes quite 
> busy, this set of jobs can easily be backed off and given low priority. We cannot 
> simply treat this type of job as a "bad" user who randomly issues too many calls, 
> as their calls are normal calls. Also, it is unfair to weight an end-user and 
> a heavy service user equally when allocating RPC resources.
> One idea here is to introduce reservation support to RPC resources. That is, 
> for some services, we reserve some RPC resources for their calls. This idea 
> is very similar to how YARN manages CPU/memory resources among different 
> resource queues. A little more details here: Along with existing 
> FairCallQueue setup (like using 4 queues with different priorities), we would 
> add some additional special queues, one for each special service user. For 
> each special service user, we provide a guarantee RPC share (like 10% which 
> can be aligned with its YARN resource share), and this percentage can be 
> converted to a weight used in WeightedRoundRobinMultiplexer. A quick example, 
> we have 4 default queues with default weights (8, 4, 2, 1), and two special 
> service users (user1 with 10% share, and user2 with 15% share). So finally 
> we'll have 6 queues, 4 default queues (with weights 8, 4, 2, 1) and 2 special 
> queues (user1Queue weighted 15*10%/75%=2, and user2Queue weighted 
> 15*15%/75%=3).
> For new coming RPC calls from special service users, they will be put 
> directly to the corresponding reserved queue; for other calls, just follow 
> current implementation.
> By default, there is no special user and all RPC requests follow existing 
> FairCallQueue implementation.
> Would like to hear more comments on this approach; also want to know any 
> other better solutions? Will put a detailed design once get some early 
> comments.
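
To make the arithmetic above concrete, a small sketch of the share-to-weight 
conversion under the assumptions in this description (not actual FairCallQueue 
code):

{code:java}
// Illustrative only: converting reserved shares into multiplexer weights,
// following the worked example above.
public class ReservedWeightSketch {
  public static void main(String[] args) {
    int[] defaultWeights = {8, 4, 2, 1};    // existing FairCallQueue queue weights
    int defaultSum = 0;
    for (int w : defaultWeights) {
      defaultSum += w;                      // 15, serving the unreserved share
    }
    double reserved = 0.10 + 0.15;          // user1 10%, user2 15%
    double unreserved = 1.0 - reserved;     // 75%

    double user1Weight = defaultSum * 0.10 / unreserved;  // 15 * 10% / 75% = 2
    double user2Weight = defaultSum * 0.15 / unreserved;  // 15 * 15% / 75% = 3
    System.out.printf("user1=%.0f, user2=%.0f%n", user1Weight, user2Weight);
  }
}
{code}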



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347064#comment-16347064
 ] 

Elek, Marton commented on HADOOP-15007:
---

As the tags are used only to classify configuration for the end user and have 
no strict programmatic meaning, I propose the following simplification:

 * I think the PropertyTag enum could be removed (with all of the children)

 * I would use just simple strings as tags

 * We will lose type safety, but it's not a big deal as the tags are not used 
in the code. We don't need constants.

 * With simple string based tags we don't need to care about the dependency and 
compatibility problems. They would be solved.

 * To avoid typo bugs: I would extend the TestConfigurationFieldsBase to check 
the available tags and throw an error if a tag is used less than 3 times.

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15198) Correct the spelling in CopyFilter.java

2018-01-31 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HADOOP-15198:
--

 Summary: Correct the spelling in CopyFilter.java
 Key: HADOOP-15198
 URL: https://issues.apache.org/jira/browse/HADOOP-15198
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


configuration is misspelled as "configuratoin" in the javadoc for 
CopyFilter.java

{code}
  /**
   * Public factory method which returns the appropriate implementation of
   * CopyFilter.
   *
   * @param conf DistCp configuratoin
   * @return An instance of the appropriate CopyFilter
   */
  public static CopyFilter getCopyFilter(Configuration conf) {
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15197) Remove tomcat from the Hadoop-auth test bundle

2018-01-31 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346917#comment-16346917
 ] 

Kihwal Lee commented on HADOOP-15197:
-

+1 lgtm

> Remove tomcat from the Hadoop-auth test bundle
> --
>
> Key: HADOOP-15197
> URL: https://issues.apache.org/jira/browse/HADOOP-15197
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15197.01.patch
>
>
> We have switched KMS and HttpFS from tomcat to jetty in 3.0. There appear to 
> be some leftover tests in Hadoop-auth which were used for KMS / HttpFS 
> coverage.
> We should clean up the tests accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org