[jira] [Comment Edited] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException

2018-03-27 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416911#comment-16416911
 ] 

Gabor Bota edited comment on HADOOP-14758 at 3/28/18 6:46 AM:
--

Hi [~ste...@apache.org],

Could you tell me what the connection is between this issue and HADOOP-14420?

Thanks,
Gabor


was (Author: gabor.bota):
Hi [~ste...@apache.org],

Could you tell the connection between this issue and HADOOP-14420?

Thanks,
Gabor

> S3GuardTool.prune to handle UnsupportedOperationException
> -
>
> Key: HADOOP-14758
> URL: https://issues.apache.org/jira/browse/HADOOP-14758
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-14758.001.patch
>
>
> {{MetadataStore.prune()}} may throw {{UnsupportedOperationException}} if not 
> supported.
> {{S3GuardTool.prune}} should recognise this, catch it and treat it 
> differently from any other failure, e.g. inform the user and return 0, as it's a no-op.
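> For illustration, a minimal sketch of that handling (hypothetical method and 
> constant names, not the actual S3GuardTool code):
> {code:java}
> try {
>   getMetadataStore().prune(cutoffMillis);
> } catch (UnsupportedOperationException e) {
>   // This store has nothing to prune: report it and succeed rather than fail.
>   System.out.println("Metadata store does not support pruning; nothing to do.");
>   return SUCCESS; // exit code 0
> }
> {code}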



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException

2018-03-27 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416911#comment-16416911
 ] 

Gabor Bota commented on HADOOP-14758:
-

Hi [~ste...@apache.org],

Could you tell me what the connection is between this issue and HADOOP-14420?

Thanks,
Gabor

> S3GuardTool.prune to handle UnsupportedOperationException
> -
>
> Key: HADOOP-14758
> URL: https://issues.apache.org/jira/browse/HADOOP-14758
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-14758.001.patch
>
>
> {{MetadataStore.prune()}} may throw {{UnsupportedOperationException}} if not 
> supported.
> {{S3GuardTool.prune}} should recognise this, catch it and treat it 
> differently from any other failure, e.g. inform the user and return 0, as it's a no-op.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15317:
---
Attachment: HADOOP-15317.05.patch

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch, HADOOP-15317.05.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416876#comment-16416876
 ] 

Xiao Chen commented on HADOOP-15317:


Thanks for the review, Eddy.

[^HADOOP-15317.05.patch] addresses the comments. As synced offline, the only 
reason for {{lastValidNode}} is that we're not sure what exactly caused the 
initial bug (the infinite loop). Theoretically it should not happen, but this 
will give us some insight if it does. I also changed the log level to error to 
make it easier to find.
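
For context, the shape of that safety net (illustrative only, not the exact 
patch):

{code:java}
if (ret == null && lastValidNode != null) {
  // Should be unreachable; log loudly so a recurrence of the original
  // infinite-loop bug leaves a trace, then fall back to the last valid node.
  LOG.error("Failed to find a datanode (scope=\"{}\" excludedScope=\"{}\").",
      scope, excludedScope);
  ret = lastValidNode;
}
{code}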

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch, HADOOP-15317.05.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15317:
---
Attachment: (was: HADOOP-15317.05.patch)

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15317:
---
Attachment: HADOOP-15317.05.patch

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch, HADOOP-15317.05.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes

2018-03-27 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416844#comment-16416844
 ] 

He Xiaoqiao commented on HADOOP-15345:
--

Thanks [~ajayydv] for your comments.
Ping [~elgoiri], [~shahrs87]: any suggestions for this issue?

> Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient 
> adding/getting/removing nodes
> ---
>
> Key: HADOOP-15345
> URL: https://issues.apache.org/jira/browse/HADOOP-15345
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.6
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15345-branch-2.7.001.patch
>
>
> As per the discussion in HADOOP-15343, backport HADOOP-12185 to branch-2.7.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416597#comment-16416597
 ] 

genericqa commented on HADOOP-15317:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 42m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 38s{color} | {color:orange} root: The patch generated 1 new + 53 unchanged - 
0 fixed = 54 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 29s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}237m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskChecker |
|   | hadoop.conf.TestCommonConfigurationFields |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916480/HADOOP-15317.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d44a52a98725 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personal

[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416578#comment-16416578
 ] 

genericqa commented on HADOOP-15320:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-tools: The patch generated 0 new + 25 
unchanged - 24 fixed = 25 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916504/HADOOP-15320.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 04be5bf449b9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2a2ef15 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14400

[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416516#comment-16416516
 ] 

Chris Douglas commented on HADOOP-15320:


Fixed checkstyle warnings. +1 from me. [~ste...@apache.org], lgty?

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.01.patch, HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, with each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with this mock is that for large (~TB) files it generates lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation 
> and fall back to the default FileSystem.getFileBlockLocations() implementation, 
> which returns 1 block for any file, with the single host "localhost". Note that 
> this doesn't mean we will create far fewer splits, because the number of 
> splits is still limited by the blockSize in 
> FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}
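> For illustration (assumed numbers): with minSize = 1 and a reported block 
> size of 128 MB, a 1 TB file gets a split size of at most 128 MB from the 
> formula above, hence at least on the order of 8192 splits, regardless of 
> whether the store reports one artificial "localhost" block or thousands of 
> them.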



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15320:
---
Attachment: HADOOP-15320.01.patch

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.01.patch, HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, with each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with this mock is that for large (~TB) files it generates lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation 
> and fall back to the default FileSystem.getFileBlockLocations() implementation, 
> which returns 1 block for any file, with the single host "localhost". Note that 
> this doesn't mean we will create far fewer splits, because the number of 
> splits is still limited by the blockSize in 
> FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416497#comment-16416497
 ] 

genericqa commented on HADOOP-15320:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-tools: The patch generated 2 new + 25 
unchanged - 24 fixed = 27 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914953/HADOOP-15320.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux baac5fbe684c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fe41c6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14399/ar

[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416436#comment-16416436
 ] 

Lei (Eddy) Xu commented on HADOOP-15317:


Thanks for working on this, [~xiaochen].

A few comments:

{code}
int nthValidToReturn = r.nextInt(availableNodes);
LOG.debug("nthValidToReturn is {}", nthValidToReturn);
if (nthValidToReturn < 0) {
  return null;
}
{code}

{{nthValidToReturn}} won't be negative, so we don't need to check it here.

{code}
assert numInScopeNodes >= availableNodes && availableNodes > 0;
{code}

* Can you use Preconditions with error messages? assert statements may be 
disabled at runtime in production (see the sketch at the end of this comment).

* Also, I'd suggest not putting {{Very likely a bug}} in the log message.

* Can you add some proof of when this {{if (ret == null && lastValidNode != null)}} 
case will happen?

{code}
for (int i = 0; i < numInScopeNodes; ++i) {
   ret = parentNode.getLeaf(i, excludedScopeNode);
{code}

Does the above mean that it always starts from {{i=0}}?
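
On the Preconditions point above, a minimal sketch (mirroring the asserted 
condition; the message text is illustrative):

{code:java}
// com.google.common.base.Preconditions, already on Hadoop's classpath.
Preconditions.checkState(numInScopeNodes >= availableNodes && availableNodes > 0,
    "Expected numInScopeNodes (%s) >= availableNodes (%s) > 0",
    numInScopeNodes, availableNodes);
{code}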

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416424#comment-16416424
 ] 

Chris Douglas commented on HADOOP-15320:


Thanks [~shanyu]. Running this through Jenkins.

We could add a unit test to signal that a change to the default behavior could 
affect these FS implementations, but that should be implied.
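
If we did add one, a minimal sketch of such a guard test (hypothetical fixture 
{{fs}} and {{testFile}}; it pins the default behavior these connectors now 
inherit):

{code:java}
@Test
public void testDefaultGetFileBlockLocations() throws Exception {
  FileStatus stat = fs.getFileStatus(testFile);
  BlockLocation[] locs = fs.getFileBlockLocations(stat, 0, stat.getLen());
  // Default FileSystem behavior: one block spanning the file, host "localhost".
  assertEquals(1, locs.length);
  assertArrayEquals(new String[] {"localhost"}, locs[0].getHosts());
}
{code}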

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, with each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with this mock is that for large (~TB) files it generates lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation 
> and fall back to the default FileSystem.getFileBlockLocations() implementation, 
> which returns 1 block for any file, with the single host "localhost". Note that 
> this doesn't mean we will create far fewer splits, because the number of 
> splits is still limited by the blockSize in 
> FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15320:
---
Status: Patch Available  (was: Open)

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 3.0.0, 2.9.0, 2.7.3
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, with each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with this mock is that for large (~TB) files it generates lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation 
> and fall back to the default FileSystem.getFileBlockLocations() implementation, 
> which returns 1 block for any file, with the single host "localhost". Note that 
> this doesn't mean we will create far fewer splits, because the number of 
> splits is still limited by the blockSize in 
> FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes

2018-03-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416347#comment-16416347
 ] 

Ajay Kumar commented on HADOOP-15345:
-

+1

> Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient 
> adding/getting/removing nodes
> ---
>
> Key: HADOOP-15345
> URL: https://issues.apache.org/jira/browse/HADOOP-15345
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.6
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15345-branch-2.7.001.patch
>
>
> As per the discussion in HADOOP-15343, backport HADOOP-12185 to branch-2.7.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416346#comment-16416346
 ] 

Sean Mackrory commented on HADOOP-15342:


Also ran the tests against Central US and can confirm no dependency changes. 
The diff between 2.2.5 and 2.2.7 looks fairly straightforward on GitHub. +1. 
I'll wait to commit in case [~ste...@apache.org] has any further input.

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Attachments: HADOOP-15342.001.patch
>
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the 
> REST interface to Azure Active Directory's VM MSI interface, and improved 
> diagnostics in the SDK for token acquisition failures (a better exception 
> message and log message). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416333#comment-16416333
 ] 

Xiao Chen commented on HADOOP-15317:


Thanks for looking at the test failures; they were definitely related.

[^HADOOP-15317.04.patch] improves the first random-choice logic - we need this 
random index to be drawn from all nodes, not just the available nodes, so that 
it has no correlation with nthValidToReturn (i.e. getLeaf(nthValidToReturn) 
would not give an even distribution).

Also tuned the test logging / assertion message so that it's obvious which 
node failed the assertion. It should be straightforward to search the logs and 
find out what the excluded nodes and results are.
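
For reference, a rough sketch of the nth-valid scan being discussed 
(illustrative only, not the patch itself):

{code:java}
int nthValidToReturn = random.nextInt(availableNodes);
Node ret = null;
for (int i = 0; i < numInScopeNodes && nthValidToReturn >= 0; ++i) {
  Node candidate = parentNode.getLeaf(i, excludedScopeNode);
  if (candidate != null && !excludedNodes.contains(candidate)) {
    ret = candidate;       // after the loop ends, ret is the nth valid node
    --nthValidToReturn;
  }
}
{code}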

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15317:
---
Attachment: HADOOP-15317.04.patch

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it had just gone through a rolling restart, and 
> DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it was inside a 
> do-while loop in {{NetworkTopology#chooseRandom}} - part of what was done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory as to 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-03-27 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416290#comment-16416290
 ] 

Sean Mackrory commented on HADOOP-14759:


So you should also be able to prune an entire table by specifying "bin/hadoop 
s3guard prune -hours 1 -meta dynamodb://table_name", but that currently yields 
the following exception; let's default to / when no prefix is specified at all:
{code:java}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    at java.util.ArrayList.rangeCheck(ArrayList.java:653)
    at java.util.ArrayList.get(ArrayList.java:429)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:970)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1480)
    at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1489){code}
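
A minimal sketch of that defaulting (hypothetical names, not the actual 
S3GuardTool code):
{code:java}
// Fall back to pruning everything under the root when no path is given.
String keyPrefix = paths.isEmpty() ? "/" : keyPrefixOf(paths.get(0));
getStore().prune(cutoffMillis, keyPrefix);
{code}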

I do like that when you specify directories, this handles the URL (and not 
just the bucket / hostname) correctly by using the existing translate 
function, so I can prune metadata for only certain subdirectories. This is 
nice; I had feared it would be more difficult than just clearing out on a 
per-bucket basis.

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-27 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416283#comment-16416283
 ] 

shanyu zhao commented on HADOOP-15320:
--

I also ran the following manual tests successfully:

1) A Hive TPCH test with my change on WASB; it passed with the correct number of 
splits.

2) A Spark application converting a huge CSV file to Parquet.
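
For context, the clamping the issue description below quotes from 
{{FileInputFormat.computeSplitSize()}} behaves like this (a standalone sketch with 
made-up sizes, not the Hadoop source itself):
{code:java}
public class SplitSizeDemo {
  // Same clamping as the line quoted from FileInputFormat.computeSplitSize().
  static long computeSplitSize(long goalSize, long minSize, long blockSize) {
    return Math.max(minSize, Math.min(goalSize, blockSize));
  }

  public static void main(String[] args) {
    long mb = 1024L * 1024L;
    // Even if getFileBlockLocations() reports a single giant "localhost"
    // block, a 256MB configured block size still caps each split at 256MB.
    System.out.println(computeSplitSize(1024 * mb, 1 * mb, 256 * mb) / mb); // 256
  }
}
{code}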

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, where each block has one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up for a "remote" file system to mimic HDFS, and 
> the problem with this mock is that for large (~TB) files we generate lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow at calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation 
> and fall back to the default FileSystem.getFileBlockLocations() implementation, 
> which returns 1 block for any file, with the single host "localhost". Note that 
> this doesn't mean we will create far fewer splits, because the number of 
> splits is still limited by the blockSize in 
> FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416228#comment-16416228
 ] 

Hudson commented on HADOOP-15312:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13887 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13887/])
HADOOP-15312. Undocumented KeyProvider configuration keys. Contributed by LiXin Ge.
(weichiu: rev 3fe41c65d84843f817a4f9ef8999dbf862db6674)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.2, 3.1.1
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416211#comment-16416211
 ] 

genericqa commented on HADOOP-15342:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-tools_hadoop-azure-datalake generated 1 new + 5 unchanged 
- 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15342 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916452/HADOOP-15342.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux f0d11d3a2470 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 285bbaa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14397/artifact/out/diff-compile-javac-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14397/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14397/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342

[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15312:
-
Fix Version/s: 2.10.0

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.2, 3.1.1
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15312:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to multiple branches. Probably more than necessary. Thanks [~GeLiXin]!

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.2, 3.1.1
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15312:
-
Fix Version/s: 3.1.1
   3.0.2

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.2.0, 3.0.2, 3.1.1
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15312:
-
Fix Version/s: 3.2.0

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15311:
---
Fix Version/s: 3.1.1

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.0.2, 3.1.1
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started with size at minimum {{acceptors + selectors + 1}}, 
> so in addition to allowing a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource-constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416188#comment-16416188
 ] 

Wei-Chiu Chuang commented on HADOOP-15312:
--

+1 LGTM will commit soon.

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2018-03-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14742:
---
Fix Version/s: 3.1.1

> Document multi-URI replication Inode for ViewFS
> ---
>
> Key: HADOOP-14742
> URL: https://issues.apache.org/jira/browse/HADOOP-14742
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, viewfs
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.0.2, 3.1.1
>
> Attachments: HADOOP-14742.001.patch, HADOOP-14742.002.patch
>
>
> HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
> semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15295) Remove redundant logging related to tags from Configuration

2018-03-27 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416151#comment-16416151
 ] 

Ajay Kumar commented on HADOOP-15295:
-

[~xyao],[~anu] thanks for review and commit.

> Remove redundant logging related to tags from Configuration
> ---
>
> Key: HADOOP-15295
> URL: https://issues.apache.org/jira/browse/HADOOP-15295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15295.000.patch, HADOOP-15295.001.patch, 
> HADOOP-15295.002.patch
>
>
> Remove redundant logging related to tags from Configuration.
> {code}
> 2018-03-06 18:55:46,164 INFO conf.Configuration: Removed undeclared tags:
> 2018-03-06 18:55:46,237 INFO conf.Configuration: Removed undeclared tags:
> 2018-03-06 18:55:46,249 INFO conf.Configuration: Removed undeclared tags:
> 2018-03-06 18:55:46,256 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416150#comment-16416150
 ] 

genericqa commented on HADOOP-14758:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
32s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-14758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916441/HADOOP-14758.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4b342c306d55 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 285bbaa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14396/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14396/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3GuardTool.prune to handle UnsupportedOperationException

[jira] [Commented] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416146#comment-16416146
 ] 

Atul Sikaria commented on HADOOP-15342:
---

[~ste...@apache.org], thanks for looking. Patch is attached. 

Tests were run against East US 2.

No change in dependencies.

 

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Attachments: HADOOP-15342.001.patch
>
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the 
> REST interface to Azure Active Directory's VM MSI endpoint, and improved 
> diagnostics in the SDK for token acquisition failures (better exception 
> messages and log messages). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-15342:
--
Status: Patch Available  (was: Open)

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Attachments: HADOOP-15342.001.patch
>
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the 
> REST interface to Azure Active Directory's VM MSI endpoint, and improved 
> diagnostics in the SDK for token acquisition failures (better exception 
> messages and log messages). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-15342:
--
Attachment: HADOOP-15342.001.patch

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Attachments: HADOOP-15342.001.patch
>
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the 
> REST interface to Azure Active Directory's VM MSI endpoint, and improved 
> diagnostics in the SDK for token acquisition failures (better exception 
> messages and log messages). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15348) S3A Input Stream bytes read counter isn't getting through to StorageStatistics/instrumentation properly

2018-03-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15348:
---

 Summary: S3A Input Stream bytes read counter isn't getting through 
to StorageStatistics/instrumentation properly
 Key: HADOOP-15348
 URL: https://issues.apache.org/jira/browse/HADOOP-15348
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0, 3.1.0
Reporter: Steve Loughran


TL;DR: we should have common storage statistics for bytes read and bytes 
written, and S3A should use them in its instrumentation and have enum names to 
match.

# In the S3AInputStream we call 
{{S3AInstrumentation.StreamStatistics.bytesRead(long)}}, which adds the amount 
to {{bytesRead}} whenever a read(), readFully() or forward seek() reads in data.
# In {{S3AInstrumentation.mergeInputStreamStatistics}}, that is pulled into 
streamBytesRead,
# which has a Statistics name of "stream_bytes_read",
# but is served up in the Storage statistics as "STREAM_SEEK_BYTES_READ", 
which is the wrong name.
# And there isn't a common name for the counter across other filesystems.

For now, people can use the wrong name in the enum; we may want to think about 
retaining it when adding the correct name, and maybe add an 
@Evolving/@LimitedPrivate scope pair to the enum.
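
Until the naming is sorted out, one way to see which names are actually published 
is to enumerate them rather than guess (a minimal sketch; "s3a://some-bucket/" is 
a placeholder for a real store):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStorageStats {
  public static void main(String[] args) throws Exception {
    FileSystem fs =
        new Path("s3a://some-bucket/").getFileSystem(new Configuration());
    // Enumerate the published names rather than guessing one, since the
    // whole point of this issue is that a name may not be what you expect.
    fs.getStorageStatistics().getLongStatistics().forEachRemaining(
        s -> System.out.println(s.getName() + " = " + s.getValue()));
  }
}
{code}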



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.

2018-03-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416122#comment-16416122
 ] 

Konstantin Shvachko commented on HADOOP-15325:
--

??I think that you are viewing the use of getPassword as a mechanism only to be 
used for old passwords that used to be stored in configuration and that any new 
ones should use the credential provider API directly instead.??

Hi [~lmccay], you are exactly right. And even though {{getPassword()}} is nice 
and convenient, we need some way to introduce the new practice, which avoids adding 
passwords into configs. So a {{public getPasswordFromCredentialsProvider()}} as 
you and [~jojochuang] suggested should serve this purpose well.
I am not in favor of adding the extra FALLBACK parameter because it is quite 
confusing and complicates configuration: you really have to look at the 
code to understand what it means, and potentially read this discussion to 
understand the background.

I also think we should deprecate all existing password fields in the current 
configuration and backport that into downstream versions, so that we can remove 
them some time in the future.
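
For reference, the fallback behavior under discussion (a minimal standalone 
sketch; "my.component.password" is a made-up key used only for illustration):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class PasswordLookupDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Made-up key, set directly in the config for demonstration.
    conf.set("my.component.password", "fallback-secret");
    // getPassword() consults the configured credential providers first and
    // only then falls back to the plain config value -- the fallback this
    // discussion wants to avoid for brand-new password keys.
    char[] pw = conf.getPassword("my.component.password");
    System.out.println(pw == null ? "not found" : new String(pw));
  }
}
{code}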

> Add an option to make Configuration.getPassword() not to fallback to read 
> passwords from configuration.
> ---
>
> Key: HADOOP-15325
> URL: https://issues.apache.org/jira/browse/HADOOP-15325
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Zsolt Venczel
>Priority: Major
>
> HADOOP-10607 added a public API Configuration.getPassword() which reads 
> passwords from credential provider and then falls back to reading from 
> configuration if one is not available.
> This API has been used throughout Hadoop codebase and downstream 
> applications. It is understandable for old password configuration keys to 
> fallback to configuration to maintain backward compatibility. But for new 
> configuration passwords that don't have legacy, there should be an option to 
> _not_ fallback, because storing passwords in configuration is considered a 
> bad security practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException

2018-03-27 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14758:

Status: Patch Available  (was: Open)

> S3GuardTool.prune to handle UnsupportedOperationException
> -
>
> Key: HADOOP-14758
> URL: https://issues.apache.org/jira/browse/HADOOP-14758
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-14758.001.patch
>
>
> {{MetadataStore.prune()}} may throw {{UnsupportedOperationException}} if not 
> supported.
> {{S3GuardTool.prune}} should recognise this, catch it and treat it 
> differently from any other failure, e.g. inform the user and return 0, as it's a no-op.
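
A minimal sketch of the suggested handling, with a stand-in {{PrunableStore}} 
interface rather than the real {{MetadataStore}}:
{code:java}
import java.io.IOException;

public class PruneNoOpDemo {
  /** Stand-in for the MetadataStore.prune() contract described above. */
  interface PrunableStore {
    void prune(long cutoffMillis) throws IOException;
  }

  /** Treat "pruning unsupported" as an informative no-op, returning 0. */
  static int runPrune(PrunableStore store, long cutoffMillis) throws IOException {
    try {
      store.prune(cutoffMillis);
    } catch (UnsupportedOperationException e) {
      System.err.println("Metadata store does not support pruning; nothing to do.");
    }
    return 0; // success either way
  }

  public static void main(String[] args) throws IOException {
    PrunableStore store = cutoff -> { throw new UnsupportedOperationException(); };
    System.out.println(runPrune(store, System.currentTimeMillis())); // 0
  }
}
{code}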



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException

2018-03-27 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14758:

Attachment: HADOOP-14758.001.patch

> S3GuardTool.prune to handle UnsupportedOperationException
> -
>
> Key: HADOOP-14758
> URL: https://issues.apache.org/jira/browse/HADOOP-14758
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-14758.001.patch
>
>
> {{MetadataStore.prune()}} may throw {{UnsupportedOperationException}} if not 
> supported.
> {{S3GuardTool.prune}} should recognise this, catch it and treat it 
> differently from any other failure, e.g. inform the user and return 0, as it's a no-op.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14067:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch, HADOOP-14067.03.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread's classloader.
> However, in the case of applications that use hadoop classes dynamically 
> (e.g. JDBC-based tools such as SQuirreL SQL), the current thread might not be 
> the one that loaded the hadoop classes, including VersionInfo, and it would 
> fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, as the right version is the one associated with the rest 
> of the loaded hadoop classes, and not necessarily the one in the current 
> thread's classloader.
> Created a related jira - HADOOP-14066 - to make the methods that get the 
> version via VersionInfo a public API.
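
The classloader pattern the fix describes, as a standalone sketch (it reuses the 
resource name from the issue; Hadoop's actual resource name and fields may differ):
{code:java}
import java.io.InputStream;
import java.util.Properties;

public class OwnLoaderDemo {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // Resolve against this class's own loader (not the thread context
    // loader), so the lookup works however the hosting app loaded us.
    try (InputStream in = OwnLoaderDemo.class.getClassLoader()
        .getResourceAsStream("version-info.properties")) {
      if (in != null) {
        props.load(in);
      }
    }
    System.out.println(props.getProperty("version", "unknown"));
  }
}
{code}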



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-03-27 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415699#comment-16415699
 ] 

Sean Mackrory commented on HADOOP-14759:


I think this is a pretty good fix. I'll do some testing locally today if I can, 
and commit. One nitpick I'd like to follow up on is the behavior of the Retries 
annotation once you start nesting calls: it's not the worst thing in the world 
if prune gets retried twice, but it's also easy for the wrapper 
function to lose the annotation and have it applied only to the actual 
implementation that all prune calls would use.

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-03-27 Thread Igor Dvorzhak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415696#comment-16415696
 ] 

Igor Dvorzhak commented on HADOOP-15124:


[~sseth], thanks for the input! To be clear, the attached patch preserves 
per-thread statistics.

[~ste...@apache.org], friendly ping to review this change.

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that the FileSystem.Statistics code paths 
> accounted for 5.58% of wall time and 26.5% of CPU time of the total execution.
> After switching the FileSystem.Statistics implementation to LongAdder, consumed 
> wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average the results, but regardless of the performance gains, switching to 
> LongAdder simplifies the code and reduces its complexity.
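
A minimal sketch of the {{LongAdder}} pattern proposed here (a stand-in counter 
class, not the actual FileSystem.Statistics code):
{code:java}
import java.util.concurrent.atomic.LongAdder;

public class BytesReadCounter {
  // LongAdder stripes contended updates across internal cells, so hot
  // multi-threaded increments avoid the contention a single shared long
  // suffers -- the swap proposed for FileSystem.Statistics.
  private final LongAdder bytesRead = new LongAdder();

  void incrementBytesRead(long n) { bytesRead.add(n); }
  long getBytesRead() { return bytesRead.sum(); }

  public static void main(String[] args) {
    BytesReadCounter c = new BytesReadCounter();
    c.incrementBytesRead(4096);
    c.incrementBytesRead(8192);
    System.out.println(c.getBytesRead()); // 12288
  }
}
{code}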



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415673#comment-16415673
 ] 

genericqa commented on HADOOP-15344:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916388/HADOOP-15344.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 21c1cefde06e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c22d62b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14394/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14394/testReport/ |
| Max. process+thread count | 1531 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14394/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-27 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415651#comment-16415651
 ] 

Rushabh S Shah edited comment on HADOOP-14445 at 3/27/18 2:06 PM:
--

{quote}
2. providersCreated: Should this be a list or just KeyProvider ?
bq. I disagree because even in tests we should code against interface. 
{quote}

I think I was not clear earlier. 
What I meant was: should it be a list of KeyProviders ({{List<KeyProvider>}}) or 
just a single {{KeyProvider}} element?
I agree with you completely that we should code against the interface. That's why 
I feel it should be just a {{KeyProvider}}.
{noformat}
KeyProvider keyProvider = KeyProviderFactory.get(providerUri, conf);
{noformat}
But I think its late now since the other jira is already committed.

bq. testTokenCompatibilityOldRenewer
Your comment does make sense.
If we can test that the new RM is able to renew both tokens (which is already 
present in your test suite in the last patch) and that the identifier bits are 
the same in both tokens, then we can remove this test case.
Hope it makes sense.


was (Author: shahrs87):
{quote}
2. providersCreated: Should this be a list or just KeyProvider ?
bq. I disagree because even in tests we should code against interface. 
{quote}

I think I was not clear earlier. 
What I meant was: should it be a list of KeyProviders ({{List<KeyProvider>}}) or 
just a single {{KeyProvider}} element?
I agree with you completely that we should code against the interface. That's why 
I feel it should be just a {{KeyProvider}}.
{noformat}
KeyProvider keyProvider = KeyProviderFactory.get(providerUri, conf);
{noformat}

bq. testTokenCompatibilityOldRenewer
Your comment does make sense.
If we can test that the new RM is able to renew both tokens (which is already 
present in your test suite in the last patch) and that the identifier bits are 
the same in both tokens, then we can remove this test case.
Hope it makes sense.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-27 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415651#comment-16415651
 ] 

Rushabh S Shah edited comment on HADOOP-14445 at 3/27/18 2:05 PM:
--

{quote}
2. providersCreated: Should this be a list or just KeyProvider ?
bq. I disagree because even in tests we should code against interface. 
{quote}

I think I was not clear earlier. 
What I meant was: should it be a list of KeyProviders ({{List<KeyProvider>}}) or 
just a single {{KeyProvider}} element?
I agree with you completely that we should code against the interface. That's why 
I feel it should be just a {{KeyProvider}}.
{noformat}
KeyProvider keyProvider = KeyProviderFactory.get(providerUri, conf);
{noformat}

bq. testTokenCompatibilityOldRenewer
Your comment does make sense.
If we can test that the new RM is able to renew both tokens (which is already 
present in your test suite in the last patch) and that the identifier bits are 
the same in both tokens, then we can remove this test case.
Hope it makes sense.


was (Author: shahrs87):
{quote}
2. providersCreated: Should this be a list or just KeyProvider ?
bq. I disagree because even in tests we should code against interface. 
{quote}

I think I was not clear earlier. 
What I meant was: should it be a list of KeyProviders ({{List<KeyProvider>}}) or 
just a single {{KeyProvider}} element?
I agree with you completely that we should code against the interface. That's why 
I feel it should be just a {{KeyProvider}}.
{noformat}
KeyProvider keyProvider = KeyProviderFactory.get(providerUri, conf);
{noformat}

bq. testTokenCompatibilityOldRenewer
Your comment does make sense.
Let me see if some other test does something smart to get around this problem.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-27 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415651#comment-16415651
 ] 

Rushabh S Shah commented on HADOOP-14445:
-

{quote}
2. providersCreated: Should this be a list or just KeyProvider ?
bq. I disagree because even in tests we should code against interface. 
{quote}

I think I was not clear earlier. 
What I meant was: should it be a list of KeyProviders ({{List<KeyProvider>}}) or 
just a single {{KeyProvider}} element?
I agree with you completely that we should code against the interface. That's why 
I feel it should be just a {{KeyProvider}}.
{noformat}
KeyProvider keyProvider = KeyProviderFactory.get(providerUri, conf);
{noformat}

bq. testTokenCompatibilityOldRenewer
Your comment does make sense.
Let me see if some other test does something smart to get around this problem.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415626#comment-16415626
 ] 

Rushabh S Shah commented on HADOOP-15344:
-

Now I am thinking about whether we should make this change at all.
{{keyProvider#close}} is getting called from {{KMSTokenRenewer#renew}}:

{code:java}
// KMSTokenRenewer#renew: the provider is always closed in the finally block.
public long renew(Token token, Configuration conf) throws IOException {
  ...
  try {
    ...
    return ((KeyProviderDelegationTokenExtension.DelegationTokenExtension)
        keyProvider).renewDelegationToken(token);
  } finally {
    if (keyProvider != null) {
      keyProvider.close();
    }
  }
}
{code}
Let's assume the {{keyProvider}} object is a {{LoadBalancingKMSCP}} comprising 
two {{KMSCP}} instances, and that one {{KMSClientProvider}} has problems while 
closing.
Since there are retries inside {{LoadBalancingKMSCP#doOp}}, 
{{keyProvider#renewDelegationToken(token)}} will retry after encountering the bad 
{{KMSCP}} and succeed on the second one, but {{renew}} will then throw an 
exception from the finally block, which will fail the renew operation.
Hope it makes sense.
Let me know if I missed anything or if my analysis is incorrect.
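
For reference, the close-and-collect pattern the issue proposes looks roughly 
like this (a plain-JDK sketch using {{addSuppressed}} instead of Hadoop's 
{{MultipleIOException}}):
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CloseAllDemo {
  /** Close every delegate, then rethrow whatever failed as one exception. */
  static void closeAll(List<? extends Closeable> delegates) throws IOException {
    List<IOException> failures = new ArrayList<>();
    for (Closeable c : delegates) {
      try {
        c.close();
      } catch (IOException e) {
        failures.add(e); // keep closing the remaining delegates
      }
    }
    if (!failures.isEmpty()) {
      IOException first = failures.get(0);
      for (int i = 1; i < failures.size(); i++) {
        first.addSuppressed(failures.get(i));
      }
      throw first; // callers like the renewer above would now see this
    }
  }
}
{code}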

> LoadBalancingKMSClientProvider#close should not swallow exceptions
> --
>
> Key: HADOOP-15344
> URL: https://issues.apache.org/jira/browse/HADOOP-15344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15344.001.patch
>
>
> As [~shahrs87]'s comment on HADOOP-14445 says:
> {quote}
> LoadBalancingKMSCP never throws IOException back. It just swallows every
> IOException and logs it.
> ...
> Maybe we might want to return MultipleIOException from 
> LoadBalancingKMSCP#close. 
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15342:

Affects Version/s: 3.1.0
   3.0.0

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the
> REST interface to Azure Active Directory's VM MSI endpoint, and improved
> diagnostics in the SDK for token acquisition failures (better exception
> message and log message). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13617) Swift client retrying original request is using expired token after re-authentication

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415571#comment-16415571
 ] 

genericqa commented on HADOOP-13617:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-13617 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13617 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832334/HADOOP-13617.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14395/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Swift client retrying original request is using expired token after 
> re-authentication 
> --
>
> Key: HADOOP-13617
> URL: https://issues.apache.org/jira/browse/HADOOP-13617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
>Priority: Blocker
>  Labels: patch
> Attachments: 2016_09_13.stderrout.log, HADOOP-13617.patch
>
>
> library used: org.apache.hadoop:hadoop-openstack:2.6.0
> For long-running Swift read operations (e.g., reading a large container), the
> issued auth token has a life span of at most 30 minutes from Oracle Storage
> Service. If the token expires in the middle of the read operation, the
> SwiftRestClient
> (https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java#L1701)
> re-authenticates and acquires a new auth token. However, the retry request
> still uses the old, expired token, causing the whole operation to fail.
> Because of this bug, no meaningful (i.e., long-running) Swift operation is
> possible.
> Here is a summary of what happened with DEBUG logging turned on:
> ==
> 1. initial token acquired which will expire on 19:56:44(PDT; UTC-4):
> ---
> 2016-09-13 19:52:37 DEBUG [pool-3-thread-1] SwiftRestClient:268 - setAuth:
> endpoint=https://em2.storage.oraclecloud.com/v1/Storage-paas132;
> objectURI=https://em2.storage.oraclecloud.com/object_endpoint/null;
> token=AccessToken{id='AUTH_tk2dd9d639bbb992089dca008123c3046f',
> tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@af28493,
> expires='2016-09-13T23:56:44Z'}
> 2. token expiration and re-authentication:
> --
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - GET
> https://em2.storage.oraclecloud.com/v1/Storage-paas132/allTaxi/?prefix=000182/&format=json&delimiter=/
> X-Auth-Token: AUTH_tk2dd9d639bbb992089dca008123c3046f
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:44 WARN [pool-3-thread-1] HttpMethodDirector:697 - Unable
> to respond to any of these challenges: {token=Token}
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 401
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1698 -
> Reauthenticating
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1079 - started
> authentication
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1228 -
> Authenticating with Authenticate as tenant 'Storage-paas132' user
> 'radha.sriniva...@oracle.com' with password of length 9
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - POST
> https://em2.storage.oraclecloud.com/auth/v2.0/tokens
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 200
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1149 - Catalog
> entry [swift: object-store];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1156 - Found
> swift catalog as swift => object-store
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1169 - Endpoint
> [US => https://em2.storage.oraclecloud.com/v1/Storage-paas132 / null];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] Swift

[jira] [Commented] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-03-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415570#comment-16415570
 ] 

Steve Loughran commented on HADOOP-15342:
-

OK.

I'll need:
 * a patch of the pom
 * the name of the ADL endpoint against which you ran the entire
hadoop-azuredatalake test suite.

Does it change any dependencies?

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the
> REST interface to Azure Active Directory's VM MSI endpoint, and improved
> diagnostics in the SDK for token acquisition failures (better exception
> message and log message). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13617) Swift client retrying original request is using expired token after re-authentication

2018-03-27 Thread Jelmer Kuperus (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415569#comment-16415569
 ] 

Jelmer Kuperus commented on HADOOP-13617:
-

Is there any reason why this change was not applied? We ran into this and were
a bit surprised to find a patch readily available in a ticket from September 2016.

 

I would really appreciate it if it could be merged
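
For anyone picking this up: the shape of the fix is that, after re-authentication,
the retried request must be rebuilt with the freshly acquired token instead of
reusing the header captured before the 401. An illustrative sketch (not the
attached patch; {{execute}}, {{reauthenticate}} and {{getToken}} are placeholders
for the real SwiftRestClient internals):

{code:java}
// Retry loop that refreshes the X-Auth-Token header after re-authentication.
int status = execute(method);
if (status == 401) {                       // 401: token expired mid-operation
  reauthenticate();                        // acquires a new auth token
  method.setRequestHeader("X-Auth-Token",  // re-read the *new* token before
      getToken().getId());                 // retrying, not the cached header
  status = execute(method);
}
{code}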

> Swift client retrying original request is using expired token after 
> re-authentication 
> --
>
> Key: HADOOP-13617
> URL: https://issues.apache.org/jira/browse/HADOOP-13617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
>Priority: Blocker
>  Labels: patch
> Attachments: 2016_09_13.stderrout.log, HADOOP-13617.patch
>
>
> library used: org.apache.hadoop:hadoop-openstack:2.6.0
> For long-running Swift read operations (e.g., reading a large container), the
> issued auth token has a life span of at most 30 minutes from Oracle Storage
> Service. If the token expires in the middle of the read operation, the
> SwiftRestClient
> (https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java#L1701)
> re-authenticates and acquires a new auth token. However, the retry request
> still uses the old, expired token, causing the whole operation to fail.
> Because of this bug, no meaningful (i.e., long-running) Swift operation is
> possible.
> Here is a summary of what happened with DEBUG logging turned on:
> ==
> 1. initial token acquired which will expire on 19:56:44(PDT; UTC-4):
> ---
> 2016-09-13 19:52:37 DEBUG [pool-3-thread-1] SwiftRestClient:268 - setAuth:
> endpoint=https://em2.storage.oraclecloud.com/v1/Storage-paas132;
> objectURI=https://em2.storage.oraclecloud.com/object_endpoint/null;
> token=AccessToken{id='AUTH_tk2dd9d639bbb992089dca008123c3046f',
> tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@af28493,
> expires='2016-09-13T23:56:44Z'}
> 2. token expiration and re-authentication:
> --
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - GET
> https://em2.storage.oraclecloud.com/v1/Storage-paas132/allTaxi/?prefix=000182/&format=json&delimiter=/
> X-Auth-Token: AUTH_tk2dd9d639bbb992089dca008123c3046f
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:44 WARN [pool-3-thread-1] HttpMethodDirector:697 - Unable
> to respond to any of these challenges: {token=Token}
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 401
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1698 -
> Reauthenticating
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1079 - started
> authentication
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1228 -
> Authenticating with Authenticate as tenant 'Storage-paas132' user
> 'radha.sriniva...@oracle.com' with password of length 9
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - POST
> https://em2.storage.oraclecloud.com/auth/v2.0/tokens
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 200
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1149 - Catalog
> entry [swift: object-store];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1156 - Found
> swift catalog as swift => object-store
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1169 - Endpoint
> [US => https://em2.storage.oraclecloud.com/v1/Storage-paas132 / null];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:268 - setAuth:
> endpoint=https://em2.storage.oraclecloud.com/v1/Storage-paas132;
> objectURI=https://em2.storage.oraclecloud.com/object_endpoint/null;
> token=AccessToken{id='AUTH_tk56bbb4d6fef57b7eeba7acae598f837c',
> tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@4f03838d,
> expires='2016-09-14T00:26:45Z'}
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1216 -
> authenticated against https://em2.storage.oraclecloud.com/v1/Storage-paas132.
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - HEAD
> https://em2.storage.oraclecloud.com/v1/Storage-paas132/

[jira] [Commented] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2018-03-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415527#comment-16415527
 ] 

Steve Loughran commented on HADOOP-14831:
-

Actually my bad, I was referring to HADOOP-15107. Like I said, minor.

w.r.t. HADOOP-14937, someone needs to spend some time investigating that: the
more people who can, the better, especially to see which thread pool settings
can change. [~dmt], have you got the time for this?

> Über-jira: S3a phase IV: Hadoop 3.1 features
> 
>
> Key: HADOOP-14831
> URL: https://issues.apache.org/jira/browse/HADOOP-14831
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> All the S3/S3A features targeting Hadoop 3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15347) S3ARetryPolicy to handle AWS 500 responses with the throttle backoff policy

2018-03-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15347:
---

 Summary: S3ARetryPolicy to handle AWS 500 responses with the 
throttle backoff policy
 Key: HADOOP-15347
 URL: https://issues.apache.org/jira/browse/HADOOP-15347
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


FLINK-9061 implies that some 500 responses are caused by server-side overload
of some form.

That means they should really have the throttle retry policy applied, not the
connectivity one.
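
A sketch of the proposed mapping (illustrative only; the real change would live
in S3ARetryPolicy's exception-to-policy map, and the numbers here are arbitrary):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

// Back off exponentially on 500, as is already done for 503 throttling,
// rather than using the short fixed-interval connectivity retries.
RetryPolicy throttlePolicy =
    RetryPolicies.exponentialBackoffRetry(7, 500, TimeUnit.MILLISECONDS);
// e.g. policyMap.put(AWSStatus500Exception.class, throttlePolicy);
{code}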



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread fang zhenyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415496#comment-16415496
 ] 

fang zhenyi commented on HADOOP-15344:
--

Thanks [~xiaochen] and [~shahrs87] for reporting this. I have attached
{{HADOOP-15344.001.patch}}.

> LoadBalancingKMSClientProvider#close should not swallow exceptions
> --
>
> Key: HADOOP-15344
> URL: https://issues.apache.org/jira/browse/HADOOP-15344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15344.001.patch
>
>
> As [~shahrs87]'s comment on HADOOP-14445 says:
> {quote}
> LoadBalancingKMSCP never throws IOException back. It just swallows every
> IOException and logs it.
> ...
> Maybe we might want to return MultipleIOException from 
> LoadBalancingKMSCP#close. 
> {quote}
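
A minimal sketch of the suggested behaviour (illustrative; it assumes a
{{providers}} field and is not necessarily what the attached patch does):

{code:java}
// Collect per-provider close failures and rethrow them together via
// org.apache.hadoop.io.MultipleIOException instead of swallowing them.
@Override
public void close() throws IOException {
  List<IOException> exceptions = new ArrayList<>();
  for (KMSClientProvider provider : providers) {
    try {
      provider.close();
    } catch (IOException ioe) {
      LOG.error("Error closing KMS client provider", ioe);
      exceptions.add(ioe);
    }
  }
  if (!exceptions.isEmpty()) {
    throw MultipleIOException.createIOException(exceptions);
  }
}
{code}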



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15344:
-
Attachment: HADOOP-15344.001.patch

> LoadBalancingKMSClientProvider#close should not swallow exceptions
> --
>
> Key: HADOOP-15344
> URL: https://issues.apache.org/jira/browse/HADOOP-15344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15344.001.patch
>
>
> As [~shahrs87]'s comment on HADOOP-14445 says:
> {quote}
> LoadBalancingKMSCP never throws IOException back. It just swallows every
> IOException and logs it.
> ...
> Maybe we might want to return MultipleIOException from 
> LoadBalancingKMSCP#close. 
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15344:
-
Status: Patch Available  (was: Open)

> LoadBalancingKMSClientProvider#close should not swallow exceptions
> --
>
> Key: HADOOP-15344
> URL: https://issues.apache.org/jira/browse/HADOOP-15344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15344.001.patch
>
>
> As [~shahrs87]'s comment on HADOOP-14445 says:
> {quote}
> LoadBalancingKMSCP never throws IOException back. It just swallows every
> IOException and logs it.
> ...
> Maybe we might want to return MultipleIOException from 
> LoadBalancingKMSCP#close. 
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14825) Über-JIRA: S3Guard Phase II: Hadoop 3.1 features

2018-03-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14825.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

Closing as done for 3.1. Thanks to everyone who helped here.

> Über-JIRA: S3Guard Phase II: Hadoop 3.1 features
> 
>
> Key: HADOOP-14825
> URL: https://issues.apache.org/jira/browse/HADOOP-14825
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
>
> JIRA covering everything we want/need for a phase II of S3Guard
> That includes
> * features
> * fixes
> * Everything which was open under the HADOOP-13345 JIRA when S3Guard was 
> committed to trunk



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2018-03-27 Thread dmtran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415393#comment-16415393
 ] 

dmtran commented on HADOOP-14831:
-

I am personally not in a hurry and can wait for HADOOP-14937.

Thanks a lot for working on these features!

> Über-jira: S3a phase IV: Hadoop 3.1 features
> 
>
> Key: HADOOP-14831
> URL: https://issues.apache.org/jira/browse/HADOOP-14831
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> All the S3/S3A features targeting Hadoop 3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15264) AWS "shaded" SDK 1.11.271 is pulling in netty 4.1.17

2018-03-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15264:

Summary: AWS "shaded" SDK 1.11.271 is pulling in netty 4.1.17  (was: AWS 
"shaded" SDK 1.271 is pulling in netty 4.1.17)

> AWS "shaded" SDK 1.11.271 is pulling in netty 4.1.17
> 
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: HADOOP-15264-001.patch
>
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15346) S3ARetryPolicy for 400/BadArgument to be "fail"

2018-03-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15346:
---

 Summary: S3ARetryPolicy for 400/BadArgument to be "fail"
 Key: HADOOP-15346
 URL: https://issues.apache.org/jira/browse/HADOOP-15346
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The retry policy for the AWS 400/BadArgument response is currently "treat as a
connectivity error", on the basis that sometimes the request works again.

It doesn't, not normally, and because the connectivity retry policy is used,
unrecoverable failures can take time to surface.

Proposed: switch to a fail-fast policy for BadArgumentException.
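
A sketch of the proposed policy (illustrative; {{AWSBadRequestException}} is the
S3A exception raised for 400 responses, and the mapping would live in
S3ARetryPolicy):

{code:java}
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

// Fail immediately on 400/BadArgument instead of retrying it as if it
// were a transient connectivity problem.
RetryPolicy failFast = RetryPolicies.TRY_ONCE_THEN_FAIL;
// e.g. policyMap.put(AWSBadRequestException.class, failFast);
{code}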



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415185#comment-16415185
 ] 

genericqa commented on HADOOP-14445:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 286 unchanged - 5 fixed = 287 total (was 291) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916337/HADOOP-14445.09.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 0cf88fe4dbe1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c22d62b |
| maven | ver

[jira] [Commented] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes

2018-03-27 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415136#comment-16415136
 ] 

genericqa commented on HADOOP-15345:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 64 unchanged - 1 fixed = 64 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 65 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverControllerStress |
|   | hadoop.util.bloom.TestBloomFilters |
|   | hadoop.ipc.TestCallQueueManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:06eafee |
| JIRA Issue | HADOOP-15345 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916342/HADOOP-15345-branch-2.7.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 90933f24cd54 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.7 / 94bb6fa |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14393/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14393/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14393/testReport/ |
| Max. process+thread count | 482 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14393/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-12185

[jira] [Assigned] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions

2018-03-27 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi reassigned HADOOP-15344:


Assignee: fang zhenyi

> LoadBalancingKMSClientProvider#close should not swallow exceptions
> --
>
> Key: HADOOP-15344
> URL: https://issues.apache.org/jira/browse/HADOOP-15344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Minor
>
> As [~shahrs87]'s comment on HADOOP-14445 says:
> {quote}
> LoadBalancingKMSCP never throws IOException back. It just swallows every
> IOException and logs it.
> ...
> Maybe we might want to return MultipleIOException from 
> LoadBalancingKMSCP#close. 
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org