[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319836#comment-16319836
 ] 

genericqa commented on HADOOP-15158:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905403/HADOOP-15158.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5da977df18c9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98a2e6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13942/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13942/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://iss

[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15158:
-
Fix Version/s: 2.9.1

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Fix For: 2.9.1, 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.
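A minimal sketch of the constructor shape the issue describes, with a hypothetical provider name; the actual provider class and the way role info is encoded in the URI are defined by the patch, not here.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

// Hypothetical class name for illustration only.
public class RoleAwareCredentialsProvider {
  private final String roleInfo;

  // Key idea from the issue: accept the filesystem URI (plus the
  // Configuration) so per-role credentials can be selected from it.
  public RoleAwareCredentialsProvider(URI uri, Configuration conf) {
    // How the role is encoded in the URI is an assumption of this sketch.
    this.roleInfo = (uri.getUserInfo() != null) ? uri.getUserInfo() : uri.getHost();
  }

  public String getRoleInfo() {
    return roleInfo;
  }
}
{code}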



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319790#comment-16319790
 ] 

wujinhu commented on HADOOP-15158:
--

Attached patch. I added a test to cover the changed code lines.

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Fix For: 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15158:
-
Attachment: HADOOP-15158.003.patch

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Fix For: 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319774#comment-16319774
 ] 

genericqa commented on HADOOP-15158:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905396/HADOOP-15158.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3156392323dd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98a2e6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13941/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13941/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13941/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetu

[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-09 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15027:
---
Target Version/s: 3.1.0, 2.9.1, 3.0.1

> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> AliyunOSS, so we can do some refactoring by using multi-threaded pre-read to 
> improve read performance.
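A rough illustration of the multi-threaded pre-read idea, not the AliyunOSSInputStream implementation; the RangeReader abstraction and part sizing below are assumptions of this sketch.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public final class PreReadSketch {
  /** Abstraction over a ranged GET; hypothetical, for illustration only. */
  interface RangeReader {
    byte[] read(long offset, int length) throws Exception;
  }

  /** Submit several part reads at once so network fetches overlap. */
  static List<Future<byte[]>> preRead(ExecutorService pool, RangeReader reader,
      long start, int partSize, int parts) {
    List<Future<byte[]>> pending = new ArrayList<>();
    for (int i = 0; i < parts; i++) {
      final long offset = start + (long) i * partSize;
      pending.add(pool.submit(() -> reader.read(offset, partSize)));
    }
    return pending; // the caller consumes the parts in order
  }
}
{code}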



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15140) S3guard mistakes root URI without / as non-absolute path

2018-01-09 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine reassigned HADOOP-15140:
-

Assignee: Abraham Fine

> S3guard mistakes root URI without / as non-absolute path
> 
>
> Key: HADOOP-15140
> URL: https://issues.apache.org/jira/browse/HADOOP-15140
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>
> If you call {{getFileStatus("s3a://bucket")}} then S3Guard will throw an 
> exception in putMetadata, as it mistakes the empty path for "non-absolute 
> path"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15158:
-
Attachment: HADOOP-15158.002.patch

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Fix For: 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15158:
-
Fix Version/s: 3.0.1
   Status: Patch Available  (was: In Progress)

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Fix For: 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-01-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319594#comment-16319594
 ] 

genericqa commented on HADOOP-14445:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project: The patch generated 12 
new + 122 unchanged - 4 fixed = 134 total (was 126) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905371/HADOOP-14445.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 65541762e658 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8ee7080 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/

[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319502#comment-16319502
 ] 

Aaron Fabbri commented on HADOOP-15141:
---

The javac warnings are deprecation warnings on the AWS API.

Checkstyle shows a couple legitimate issues that could be fixed (break and wrap 
string literals).

I mentioned a minor typo in my previous comment that is still there.

I'm awaiting permissions from the IT gods to be able to assume a role; then I'll 
have test results.

{noformat}
+Trying to learn how IAM Assumed Roles work by debugging stack traces from
+the S3A client is "suboptimal".
{noformat}

Haha. Except for us--it is The Right Thing To Do (tm).  Need those stack traces.

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard
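A minimal sketch of the configuration this would enable: fs.s3a.assumed.role.arn is named in the issue, while the ARN value and the provider class name below are placeholders, not the names fixed by the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AssumedRoleConfigSketch {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    // Property named in this issue; the ARN value is a placeholder.
    conf.set("fs.s3a.assumed.role.arn",
        "arn:aws:iam::123456789012:role/example-role");
    // Hypothetical provider class name for illustration only.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider");
    return conf;
  }
}
{code}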



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15140) S3guard mistakes root URI without / as non-absolute path

2018-01-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319489#comment-16319489
 ] 

Aaron Fabbri commented on HADOOP-15140:
---

I believe [~abrahamfine] was looking for noob JIRAs to become familiar with 
Hadoop upstream dev.  How about this one?

> S3guard mistakes root URI without / as non-absolute path
> 
>
> Key: HADOOP-15140
> URL: https://issues.apache.org/jira/browse/HADOOP-15140
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> If you call {{getFileStatus("s3a://bucket")}} then S3Guard will throw an 
> exception in putMetadata, as it mistakes the empty path for "non-absolute 
> path"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-01-09 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14445:

Attachment: HADOOP-14445.003.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, HADOOP-14445.003.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
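A small sketch of why the tokens are not shared, using hypothetical KMS hostnames: the token-service key is derived from each instance's address, so the two instances look up different keys for the same credentials.

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class KmsTokenServiceSketch {
  public static void main(String[] args) {
    // Hypothetical hostnames; under HA each instance yields its own service key.
    Text kms1 = SecurityUtil.buildTokenService(
        new InetSocketAddress("kms1.example.com", 9600));
    Text kms2 = SecurityUtil.buildTokenService(
        new InetSocketAddress("kms2.example.com", 9600));
    // Different keys, so a token stored under one is not found under the other.
    System.out.println(kms1 + " vs " + kms2);
  }
}
{code}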



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2018-01-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319465#comment-16319465
 ] 

Yufei Gu commented on HADOOP-15060:
---

[~miklos.szeg...@cloudera.com], Thanks for working on this. +1 for the patch. 

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7553.000.patch
>
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319057#comment-16319057
 ] 

genericqa commented on HADOOP-15141:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 39s{color} 
| {color:red} root generated 2 new + 1240 unchanged - 0 fixed = 1242 total (was 
1240) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 3 new + 16 unchanged - 
0 fixed = 19 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15141 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905317/HADOOP-15141-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux ad5d71892f53 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personalit

[jira] [Commented] (HADOOP-15160) Confusing text in http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

2018-01-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318994#comment-16318994
 ] 

Daniel Templeton commented on HADOOP-15160:
---

Thanks for the heads up, [~steve_l].  I agree that it sounds like that line 
ended up under the wrong heading.  The duplicate headings also give me pause.  
I'll have to dig in to see where exactly that went south, but I'm on vacation 
until next Tuesday.  I'll take a look then.  Feel free to assign it to me.

> Confusing text in 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
> 
>
> Key: HADOOP-15160
> URL: https://issues.apache.org/jira/browse/HADOOP-15160
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Jim Showalter
>Priority: Minor
>
> The text in wire formats, policy, is confusing.
> First, there are two subsections with the same heading:
> The following changes to a .proto file SHALL be considered incompatible:
> The following changes to a .proto file SHALL be considered incompatible:
> Second, one of the items listed under the first of those two headings seems 
> like it is a compatible change, not an incompatible change:
> Delete an optional field as long as the optional field has reasonable 
> defaults to allow deletions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15161) s3a: Stream and common statistics missing from metrics

2018-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318972#comment-16318972
 ] 

Hudson commented on HADOOP-15161:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13471 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13471/])
HADOOP-15161. s3a: Stream and common statistics missing from metrics (stevel: 
rev b62a5ece95a6b5bbb17f273debd55bcbf0c5f28c)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMetrics.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> s3a: Stream and common statistics missing from metrics
> --
>
> Key: HADOOP-15161
> URL: https://issues.apache.org/jira/browse/HADOOP-15161
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 3.1.0
>
> Attachments: HADOOP-15161.001.patch, HADOOP-15161.002.patch
>
>
> Input stream statistics aren't being passed through to metrics once merged. 
> Also, the following "common statistics" are not being incremented or tracked 
> by metrics:
> {code}
> OP_APPEND
> OP_CREATE
> OP_CREATE_NON_RECURSIVE
> OP_DELETE
> OP_GET_CONTENT_SUMMARY
> OP_GET_FILE_CHECKSUM
> OP_GET_STATUS
> OP_MODIFY_ACL_ENTRIES
> OP_OPEN
> OP_REMOVE_ACL
> OP_REMOVE_ACL_ENTRIES
> OP_REMOVE_DEFAULT_ACL
> OP_SET_ACL
> OP_SET_OWNER
> OP_SET_PERMISSION
> OP_SET_TIMES
> OP_TRUNCATE
> {code}
> Most of those make sense, but we can easily add OP_CREATE (and its 
> non-recursive cousin), OP_DELETE, and OP_OPEN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318949#comment-16318949
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160495383
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
 ---
@@ -86,7 +86,15 @@
*/
   @Deprecated
   public static boolean isJava7OrAbove() {
-return true;
+return isJavaSpecAtLeast(7);
+  }
+
+  // "1.8"->8, "9"->9, "10"->10
+  private static final int JAVA_SPEC_VER = Math.max(8, Integer.parseInt(
+  System.getProperty("java.specification.version").split("\\.")[0]));
+
+  public static boolean isJavaSpecAtLeast(int version) {
--- End diff --

1. Can we call this `isJavaVersionAtLeast` ? 
1. + javadoc explaining what it does
1. Needs a test in `TestShell` which calls the new API operation & sees that 
it is consistent with its expectations (i.e. on trunk it must be 8+)
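A rough sketch of the kind of test suggested here, assuming the proposed isJavaVersionAtLeast name; the final method name and where the test lives are up to the patch.

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.util.Shell;
import org.junit.Test;

public class TestShellJavaVersion {
  @Test
  public void testJavaVersionAtLeast() {
    // Trunk builds require Java 8+, so this must always hold there.
    assertTrue(Shell.isJavaVersionAtLeast(8));
  }
}
{code}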


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but the class constructor may be accessed 
> via a method handle on Java 9 to create instances implementing Checksum at runtime.
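A minimal sketch of that method-handle approach, compiled against Java 8; the class structure and the PureJavaCrc32C fallback here are illustrative, not the actual patch.

{code:java}
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.zip.Checksum;
import org.apache.hadoop.util.PureJavaCrc32C;

public final class Crc32CFactorySketch {
  // Resolved once; null when running on Java 8, where CRC32C does not exist.
  private static final MethodHandle CRC32C_CTOR = lookupCrc32C();

  private static MethodHandle lookupCrc32C() {
    try {
      Class<?> clazz = Class.forName("java.util.zip.CRC32C");
      return MethodHandles.publicLookup()
          .findConstructor(clazz, MethodType.methodType(void.class));
    } catch (ReflectiveOperationException e) {
      return null;
    }
  }

  public static Checksum newCrc32C() {
    if (CRC32C_CTOR != null) {
      try {
        return (Checksum) CRC32C_CTOR.invoke();
      } catch (Throwable t) {
        // fall through to the pure-Java implementation
      }
    }
    return new PureJavaCrc32C();
  }
}
{code}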



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318947#comment-16318947
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160495013
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
 ---
@@ -86,7 +86,15 @@
*/
   @Deprecated
   public static boolean isJava7OrAbove() {
-return true;
+return isJavaSpecAtLeast(7);
--- End diff --

no, this can stay true. We don't support old stuff any more

What would be good is to have a test in {{TestShell}} which calls the new 
API operation & sees that it is consistent with its expectations (i.e on trunk 
it must be 8+)


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but the class constructor may be accessed 
> via a method handle on Java 9 to create instances implementing Checksum at runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318942#comment-16318942
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160494516
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -78,6 +87,18 @@ public static Checksum newCrc32() {
 return new CRC32();
   }
 
+  public static Checksum newCrc32C() {
--- End diff --

Again, private. The comment should say "use a volatile to avoid a lock here; 
re-entrancy unlikely except in failure mode (and inexpensive)".


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but the class constructor may be accessed 
> via a method handle on Java 9 to create instances implementing Checksum at runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318936#comment-16318936
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160494025
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -43,6 +49,9 @@
   public static final int CHECKSUM_CRC32C  = 2;
   public static final int CHECKSUM_DEFAULT = 3; 
   public static final int CHECKSUM_MIXED   = 4;
+
+  public static final Logger LOG = 
LoggerFactory.getLogger(DataChecksum.class);
--- End diff --

make private


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but the class constructor may be accessed 
> via a method handle on Java 9 to create instances implementing Checksum at runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15161) s3a: Stream and common statistics missing from metrics

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15161:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14831

> s3a: Stream and common statistics missing from metrics
> --
>
> Key: HADOOP-15161
> URL: https://issues.apache.org/jira/browse/HADOOP-15161
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 3.1.0
>
> Attachments: HADOOP-15161.001.patch, HADOOP-15161.002.patch
>
>
> Input stream statistics aren't being passed through to metrics once merged. 
> Also, the following "common statistics" are not being incremented or tracked 
> by metrics:
> {code}
> OP_APPEND
> OP_CREATE
> OP_CREATE_NON_RECURSIVE
> OP_DELETE
> OP_GET_CONTENT_SUMMARY
> OP_GET_FILE_CHECKSUM
> OP_GET_STATUS
> OP_MODIFY_ACL_ENTRIES
> OP_OPEN
> OP_REMOVE_ACL
> OP_REMOVE_ACL_ENTRIES
> OP_REMOVE_DEFAULT_ACL
> OP_SET_ACL
> OP_SET_OWNER
> OP_SET_PERMISSION
> OP_SET_TIMES
> OP_TRUNCATE
> {code}
> Most of those make sense, but we can easily add OP_CREATE (and its 
> non-recursive cousin), OP_DELETE, and OP_OPEN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15161) s3a: Stream and common statistics missing from metrics

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15161:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

+1, committed to 3.1 with that last checkstyle fix on the way. Thanks.

This wouldn't cherry-pick back to branch-3.0; if you need it there, re-open this 
JIRA with a new patch.



> s3a: Stream and common statistics missing from metrics
> --
>
> Key: HADOOP-15161
> URL: https://issues.apache.org/jira/browse/HADOOP-15161
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 3.1.0
>
> Attachments: HADOOP-15161.001.patch, HADOOP-15161.002.patch
>
>
> Input stream statistics aren't being passed through to metrics once merged. 
> Also, the following "common statistics" are not being incremented or tracked 
> by metrics:
> {code}
> OP_APPEND
> OP_CREATE
> OP_CREATE_NON_RECURSIVE
> OP_DELETE
> OP_GET_CONTENT_SUMMARY
> OP_GET_FILE_CHECKSUM
> OP_GET_STATUS
> OP_MODIFY_ACL_ENTRIES
> OP_OPEN
> OP_REMOVE_ACL
> OP_REMOVE_ACL_ENTRIES
> OP_REMOVE_DEFAULT_ACL
> OP_SET_ACL
> OP_SET_OWNER
> OP_SET_PERMISSION
> OP_SET_TIMES
> OP_TRUNCATE
> {code}
> Most of those make sense, but we can easily add OP_CREATE (and its 
> non-recursive cousin), OP_DELETE, and OP_OPEN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15160) Confusing text in http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318904#comment-16318904
 ] 

Steve Loughran commented on HADOOP-15160:
-

[~dan...@cloudera.com] will  have some opinions here

> Confusing text in 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
> 
>
> Key: HADOOP-15160
> URL: https://issues.apache.org/jira/browse/HADOOP-15160
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Jim Showalter
>Priority: Minor
>
> The text in wire formats, policy, is confusing.
> First, there are two subsections with the same heading:
> The following changes to a .proto file SHALL be considered incompatible:
> The following changes to a .proto file SHALL be considered incompatible:
> Second, one of the items listed under the first of those two headings seems 
> like it is a compatible change, not an incompatible change:
> Delete an optional field as long as the optional field has reasonable 
> defaults to allow deletions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318900#comment-16318900
 ] 

Steve Loughran commented on HADOOP-15158:
-

Changing the constructor is the simplest change for now; that could go in to 
let people use this while a new role-specific one gets developed.

And: it's better to stabilise the constructor arguments now, with something 
which can be backported.

Is there any new test which can be added?


> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15158.001.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes, an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get user info (role) from the URI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15153) [branch-2.8] Increase heap memory to avoid the OOM in pre-commit

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318885#comment-16318885
 ] 

Steve Loughran commented on HADOOP-15153:
-

bq. branch-2's version of surefire doesn't kill timed out unit tests properly.

Sounds like we should upgrade surefire, unless there's a good reason not to

> [branch-2.8] Increase heap memory to avoid the OOM in pre-commit
> 
>
> Key: HADOOP-15153
> URL: https://issues.apache.org/jira/browse/HADOOP-15153
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-15153-branch-2.8.patch
>
>
> Refernce:
> https://builds.apache.org/job/PreCommit-HDFS-Build/22528/consoleFull
> https://builds.apache.org/job/PreCommit-HDFS-Build/22528/artifact/out/branch-mvninstall-root.txt
> {noformat}
> [ERROR] unable to create new native thread -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15148) Improve DataOutputByteBuffer

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318882#comment-16318882
 ] 

Steve Loughran commented on HADOOP-15148:
-

I'd go for purging it, unless there's clear evidence of it being used, in which 
case marking it as deprecated and telling people to stop using it is probably 
better than ongoing maintenance.

> Improve DataOutputByteBuffer
> 
>
> Key: HADOOP-15148
> URL: https://issues.apache.org/jira/browse/HADOOP-15148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-15148.1.patch
>
>
> * Use ArrayDeque instead of LinkedList
> * Replace an ArrayList that was being used as a queue with ArrayDeque
> * Improve write single byte method to hard-code sizes and save time
> {quote}
> Resizable-array implementation of the Deque interface. Array deques have no 
> capacity restrictions; they grow as necessary to support usage. They are not 
> thread-safe; in the absence of external synchronization, they do not support 
> concurrent access by multiple threads. Null elements are prohibited. This 
> class is *likely to be* ... *faster than LinkedList when used as a queue.*
> {quote}
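
For illustration, a small self-contained sketch (not the attached patch) of the ArrayDeque-as-queue replacement the description proposes:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeQueueDemo {
  public static void main(String[] args) {
    // ArrayDeque grows as needed, forbids nulls, and is typically faster than
    // LinkedList when used as a FIFO queue.
    Deque<byte[]> buffers = new ArrayDeque<>();
    buffers.addLast(new byte[16]);      // enqueue at the tail
    buffers.addLast(new byte[32]);
    byte[] head = buffers.pollFirst();  // dequeue from the head
    System.out.println("dequeued " + head.length + " bytes, "
        + buffers.size() + " buffer(s) remaining");
  }
}
{code}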



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15146) Remove DataOutputByteBuffer

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318878#comment-16318878
 ] 

Steve Loughran commented on HADOOP-15146:
-

LGTM, something obsoleted by java.nio

# Are we confident it's not being used by downstream code? i.e. if you cut this, 
what breaks in a downstream build of HBase and Hive?
# the patch is moving imports around, which is almost always a -1, as it breaks 
other patches. Can you make sure your IDE is positioning imports as our 
style rules expect, and not trying to "be helpful"?

thanks

> Remove DataOutputByteBuffer
> ---
>
> Key: HADOOP-15146
> URL: https://issues.apache.org/jira/browse/HADOOP-15146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15146.1.patch, HADOOP-15146.2.patch, 
> HADOOP-15146.3.patch
>
>
> I can't seem to find any references to {{DataOutputByteBuffer}}; maybe it 
> should be deprecated or simply removed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15142) Register FTP and SFTP as FS services

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318868#comment-16318868
 ] 

Steve Loughran commented on HADOOP-15142:
-

1. I think where we're going with ftp/sftp is to retire the current set and switch 
to the one proposed in HADOOP-1. That's lacking in reviews by people who have 
promised to test it (myself included!), and, because it's going against 
different FTP systems, needs lots of cross-system testing.

Given you are using FTP, why not play with that?

Regarding the patch, the FS is registered in core-default.xml as fs.ftp.impl, 
and I think it should stay that way, given the history of problems we've had 
with the dynamic registration and classpaths.

I don't see anything related to sftp in core-default. It needs a reference in 
the file, which is something we can backport to those branches which won't get 
the new FTP client. Would you like just to do that, rather than the dynamic 
bit? The scheme bit you could leave in for consistency, even though it won't 
get used.

> Register FTP and SFTP as FS services
> 
>
> Key: HADOOP-15142
> URL: https://issues.apache.org/jira/browse/HADOOP-15142
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.0.0
>Reporter: Mario Molina
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HADOOP-15142.001.patch
>
>
> SFTPFileSystem and FTPFileSystem are not registered as FS services.
> When calling the 'get' or 'newInstance' methods of the FileSystem class, the 
> FS instance cannot be created because the scheme is not registered as a 
> FS service.
> Also, the SFTPFileSystem class doesn't have the getScheme method implemented.
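
For illustration, a minimal sketch of the two pieces discussed here: the missing getScheme override, and a configuration-based binding of the scheme mirroring the existing fs.ftp.impl entry. This is illustrative only, not the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;

public class SftpRegistrationSketch {

  // Roughly what the missing override on SFTPFileSystem would look like:
  //
  //   @Override
  //   public String getScheme() {
  //     return "sftp";
  //   }

  public static void main(String[] args) {
    // Configuration-based registration, as core-default.xml already does for
    // fs.ftp.impl; the sftp entry is the one noted as missing.
    Configuration conf = new Configuration();
    conf.set("fs.sftp.impl", "org.apache.hadoop.fs.sftp.SFTPFileSystem");
    System.out.println("sftp scheme bound to " + conf.get("fs.sftp.impl"));
  }
}
{code}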



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Attachment: HADOOP-15141-004.patch

Patch 004; wrap up all the tests that I can think of.

Being able to restrict permissions in tests is interesting, as it means that 
given a role ARN with the normal R/W permissions, we could have tests which 
assume it but with a restricted policy, such as read-only access, or R/W to S3 
but no DDB access, to see what s3guard does. A test team could have fun here.

* Tests for session names, plus a stack trace for the troubleshooting docs if an 
invalid string is passed in
* added a test for a restrictive policy, expecting IO to fail
* factored out duplication in tests for a tighter set of tests, and then added 
a description for them all
* Fixed S3AFS.toString() to not NPE when the FS is uninitialized, and added a 
test for this regular regression. (Found during debugging)
* improved the error message on (getFileStatus "/") to include that path, as it was 
just including "" as the path, which is useless.

Now you get 
{code}
java.nio.file.AccessDeniedException: s3a://hwdev-steve-ireland-new/: 
getFileStatus on s3a://hwdev-steve-ireland-new/: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
F57E52775EF3A83F; S3 Extended Request ID: 
tUs++zZ9bzNeBhT3608lk44o74uSr/JPvJw+x2inFtHFCtzvPAi3RmVaZPbwQPVH0klquaYhs1c=), 
S3 Extended Request ID: 
tUs++zZ9bzNeBhT3608lk44o74uSr/JPvJw+x2inFtHFCtzvPAi3RmVaZPbwQPVH0klquaYhs1c=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
{code}


Tested: S3 ireland
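
For anyone reviewing, a minimal sketch of how the SDK's STSAssumeRoleSessionCredentialsProvider can be built from configuration. The only property taken from the issue description below is fs.s3a.assumed.role.arn; the session-name and duration keys here are illustrative assumptions, and the real provider in the patch will differ:

{code}
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

import org.apache.hadoop.conf.Configuration;

public class AssumedRoleProviderSketch {

  public static AWSCredentialsProvider create(Configuration conf) {
    String roleArn = conf.getTrimmed("fs.s3a.assumed.role.arn");
    // Illustrative keys only; not taken from the patch.
    String session = conf.getTrimmed("fs.s3a.assumed.role.session.name", "s3a");
    int duration = conf.getInt("fs.s3a.assumed.role.session.duration", 900);

    // The SDK provider calls STS AssumeRole and renews the temporary
    // credentials in the background.
    return new STSAssumeRoleSessionCredentialsProvider.Builder(roleArn, session)
        .withRoleSessionDurationSeconds(duration)
        .build();
  }
}
{code}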

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Status: Patch Available  (was: Open)

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-01-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318847#comment-16318847
 ] 

Bharat Viswanadham commented on HADOOP-9747:


[~daryn] Thank you for the update.
My intention is just to make progress on this jira, as I have not heard back 
from you.
I will wait for your patch.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15162) UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE

2018-01-09 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318749#comment-16318749
 ] 

Eric Yang commented on HADOOP-15162:


[~daryn] {quote}
Are you writing your own custom http server and authentication filter?
{quote}

No.  This JIRA serves to provide information for less experienced developers: 
the proxy ACL must be verified to enable perimeter security.  Code written as:

{code}
UserGroupInformation proxyUser = UserGroupInformation.getLoginUser();
UserGroupInformation ugi =
    UserGroupInformation.createProxyUser(remoteUser, proxyUser);
{code}

Code written this way, without UGI.createRemoteUser(remoteUser), works equally 
well.  There is no need for an isSecurityEnabled() check, and no need to 
explicitly call UGI.createRemoteUser(remoteUser).  Users only get to shoot 
themselves in the foot if {{hadoop.http.authentication.simple.anonymous.allowed}} 
is misconfigured, which allows anyone to impersonate someone else.  I would 
propose deprecating the createRemoteUser(remoteUser) API because it creates 
confusion about how code should be written.
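
To make the perimeter check concrete, a minimal sketch (not from any patch here) of verifying the proxy ACL with the existing ProxyUsers API before acting as the remote user:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class ProxyAclCheckSketch {

  public static UserGroupInformation impersonate(String remoteUser,
      String remoteAddr, Configuration conf)
      throws IOException, AuthorizationException {
    // Load hadoop.proxyuser.* settings so the ACL check has something to consult.
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);

    UserGroupInformation proxyUser = UserGroupInformation.getLoginUser();
    UserGroupInformation ugi =
        UserGroupInformation.createProxyUser(remoteUser, proxyUser);

    // The step being stressed here: verify the proxy ACL before doing work as
    // the remote user; throws AuthorizationException if not permitted.
    ProxyUsers.authorize(ugi, remoteAddr);
    return ugi;
  }
}
{code}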

> UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE
> --
>
> Key: HADOOP-15162
> URL: https://issues.apache.org/jira/browse/HADOOP-15162
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Eric Yang
>
> {{UserGroupInformation.createRemoteUser(String user)}} hard-codes the 
> authentication method to SIMPLE (HADOOP-10683).  This bypasses the proxyuser 
> ACL check and the isSecurityEnabled check, and allows the caller to impersonate 
> anyone.  This method could be abused in the main code base, which can cause 
> part of Hadoop to become insecure without a proxyuser check, in both SIMPLE and 
> Kerberos-enabled environments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318649#comment-16318649
 ] 

Steve Loughran commented on HADOOP-15141:
-

HADOOP-15141 patch 003
* Fix up doc duplication
* Generate some more stack traces; add in docs & tests
* Removed S3A_ prefix from new constants
* Subclass the ITestS3AContractDistCp test into 
ITestS3AContractDistCpAssumedRole, which runs under assumed roles if the ARN 
for one is defined, and the FS isn't already running under assumed roles.

Testing: S3A ireland with/without s3guard, and with/without assumed roles set 
for the entire suite. This includes making sure that all is well when there 
isn't an assumed role option set for the test run.

If anyone testing this gets some new stack traces, they should go into the 
troubleshooting docs. I think we should really have:
* bad inner auth (how is that presented?). Should just be the normal error.
* What happens if you are authenticated with session tokens and try to get role 
credentials
* bad ref to the STS endpoint




> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Attachment: HADOOP-15141-003.patch

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Status: Open  (was: Patch Available)

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-01-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318602#comment-16318602
 ] 

Daryn Sharp commented on HADOOP-9747:
-

I understand this patch is critical to you but please stop hijacking.  I'm 
working on this today and should have a patch by EOD.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15162) UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE

2018-01-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318599#comment-16318599
 ] 

Daryn Sharp commented on HADOOP-15162:
--

bq. Proxy user credential should be verified if it can impersonate.
_There are no credentials_ with security disabled but a proxy user is verified 
if the client reported it's a proxy user – for http rest services via the doAs 
parameter.

bq. In my usage, I am writing a component for YARN, and end user credential is 
verified in http request.
It is verified and you have nothing to do if you use the standard HttpServer 
and authentication filters.

bq. If code is written as UGI.createRemoteUser(remoteUser), should there be a 
check to determine if the current service user can proxy? Some Hadoop PMC told 
me no because they assumed isSecurityEnabled == false, there should be no proxy 
ACL check.
Of course it should be verified and as I keep stressing it is verified.  I 
think the PMC gave you bad advice and/or didn't understand the context.

bq. If this type of assumption is applied, then we will have components talking 
to other components without honoring proxy user ACL, and leading to part of 
Hadoop being completely insecure.
This boggles me.  You are arguing: "oh no! my insecure server is completely 
insecure!"

bq. The server should decide which authentication method to use, setup 
authentication method and verify proxy ACL explicitly.
It already does.  What am I missing?  Are you writing your own custom http 
server and authentication filter?

Let's conclude this discussion.  Specifically, what existing code are you 
proposing be changed and how?  Post a patch.

> UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE
> --
>
> Key: HADOOP-15162
> URL: https://issues.apache.org/jira/browse/HADOOP-15162
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Eric Yang
>
> {{UserGroupInformation.createRemoteUser(String user)}} hard-codes the 
> authentication method to SIMPLE (HADOOP-10683).  This bypasses the proxyuser 
> ACL check and the isSecurityEnabled check, and allows the caller to impersonate 
> anyone.  This method could be abused in the main code base, which can cause 
> part of Hadoop to become insecure without a proxyuser check, in both SIMPLE and 
> Kerberos-enabled environments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14852) Intermittent failure of S3Guard TestConsistencyListFiles

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318595#comment-16318595
 ] 

Steve Loughran commented on HADOOP-14852:
-

Notable that it's happening in the (2, 2, 1, false) outcome again: that's the 
largest set under test. We could expand that a bit to see if the problem grows, 
such as adding 4 and 8 scale options

> Intermittent failure of S3Guard TestConsistencyListFiles
> 
>
> Key: HADOOP-14852
> URL: https://issues.apache.org/jira/browse/HADOOP-14852
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>
> I'm seeing intermittent test failures with a test run of {{ -Dparallel-tests 
> -DtestsThreadCount=8 -Ds3guard -Ddynamo}}  (-Dauth set or unset) in which a 
> file in DELAY-LISTING-ME isn't being returned in a listing. 
> Theories
> * test is wrong
> * config is wrong
> * code is wrong



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14852) Intermittent failure of S3Guard TestConsistencyListFiles

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318590#comment-16318590
 ] 

Steve Loughran commented on HADOOP-14852:
-

I think we could handle a failure here better by
* dumping the entire listing of files and printing it (a small helper along those lines is sketched below)
* having the intermittent client print out its state
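
A minimal sketch of the first suggestion, assuming a hypothetical helper (not part of the test suite) that captures the full recursive listing so an assertion failure can include it:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public final class ListingDump {

  private ListingDump() {
  }

  /** Build the full recursive listing of a directory as a single string. */
  public static String dumpListing(FileSystem fs, Path dir) throws IOException {
    StringBuilder sb = new StringBuilder("Listing of ").append(dir).append(":\n");
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(dir, true);
    while (it.hasNext()) {
      sb.append("  ").append(it.next().getPath()).append('\n');
    }
    return sb.toString();
  }
}
{code}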

> Intermittent failure of S3Guard TestConsistencyListFiles
> 
>
> Key: HADOOP-14852
> URL: https://issues.apache.org/jira/browse/HADOOP-14852
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>
> I'm seeing intermittent test failures with a test run of {{ -Dparallel-tests 
> -DtestsThreadCount=8 -Ds3guard -Ddynamo}}  (-Dauth set or unset) in which a 
> file in DELAY-LISTING-ME isn't being returned in a listing. 
> Theories
> * test is wrong
> * config is wrong
> * code is wrong



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14852) Intermittent failure of S3Guard TestConsistencyListFiles

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318588#comment-16318588
 ] 

Steve Loughran commented on HADOOP-14852:
-

Seen this again
{code}
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.823 s 
- in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.82 s 
- in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.281 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 334.686 
s - in org.apache.hadoop.fs.s3a.commit.magic.ITestMagicCommitProtocol
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.873 
s - in org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List
[INFO] Tests run: 43, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.274 
s - in org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract
[ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 152.859 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency
[ERROR] 
testConsistentListFiles(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency)  
Time elapsed: 73.947 s  <<< FAILURE!
java.lang.AssertionError: 
s3a://hwdev-steve-ireland-new/fork-0008/test/doTestListFiles-2-2-1-false/file-2-DELAY_LISTING_ME
 should have been listed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.verifyFileIsListed(ITestS3GuardListConsistency.java:466)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.doTestListFiles(ITestS3GuardListConsistency.java:449)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListFiles(ITestS3GuardListConsistency.java:370)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

[WARNING] Tests run: 62, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 
357.487 s - in 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
[INFO] 
[INFO] Results:
[INFO] 
{code}

> Intermittent failure of S3Guard TestConsistencyListFiles
> 
>
> Key: HADOOP-14852
> URL: https://issues.apache.org/jira/browse/HADOOP-14852
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>
> I'm seeing intermittent test failures with a test run of {{ -Dparallel-tests 
> -DtestsThreadCount=8 -Ds3guard -Ddynamo}}  (-Dauth set or unset) in which a 
> file in DELAY-LISTING-ME isn't being returned in a listing. 
> Theories
> * test is wrong
> * config is wrong
> * code is wrong



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15079:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14831

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S

[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16318247#comment-16318247
 ] 

Steve Loughran commented on HADOOP-15141:
-

I'd completely forgotten about the assumed_roles.md doc over xmas! will fix!

Failure is HADOOP-15079; I'll do a patch for that too

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org