[jira] [Updated] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15294:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~tasanuma0829]!

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}
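A minimal standalone repro (not part of the attached patch) of the JDK 9 behaviour the trace points at: the Subject credential sets now reject a null argument with this exact NullPointerException, so any logout path that ends up removing a null Kerberos credential fails. The class name is illustrative.

{code:java}
import javax.security.auth.Subject;

public class SubjectNullRemoveRepro {
  public static void main(String[] args) {
    Subject subject = new Subject();
    try {
      // Earlier JDKs tolerated a null argument here and simply returned false.
      subject.getPrivateCredentials().remove(null);
      System.out.println("null remove tolerated (pre-Java 9 behaviour)");
    } catch (NullPointerException e) {
      // Java 9+: Objects.requireNonNull inside Subject$SecureSet.remove fires first.
      System.out.println("Java 9 behaviour: " + e.getMessage());
    }
  }
}
{code}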






[jira] [Assigned] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-13 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge reassigned HADOOP-15312:
-

Assignee: LiXin Ge

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher
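For whoever documents these, a hedged probe showing how the two keys are read, with the fallback values the KeyProvider code appears to use (128-bit keys, AES/CTR/NoPadding); the constants should be verified against KeyProvider before they go into core-default.xml.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class KeyProviderDefaultsProbe {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed fallbacks, mirroring what KeyProvider seems to apply when the
    // keys are unset; double-check the DEFAULT_* constants before documenting.
    String cipher = conf.get("hadoop.security.key.default.cipher", "AES/CTR/NoPadding");
    int bitLength = conf.getInt("hadoop.security.key.default.bitlength", 128);
    System.out.println("default key cipher = " + cipher
        + ", default key bit length = " + bitLength);
  }
}
{code}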






[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-13 Thread LiXin Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398049#comment-16398049
 ] 

LiXin Ge commented on HADOOP-15312:
---

Thanks [~jojochuang] for filing this. I would like to work on this and get a better 
understanding of it.

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15308:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.2
   3.2.0
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch, 
> HADOOP-15308.002.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.
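For illustration only (not the attached patches): concatenating "file://" onto a raw Windows path puts the drive letter in the URI authority and leaves unescaped backslashes, which is exactly the error above; File#toURI produces a well-formed file:/C:/... URI instead. The path below is hypothetical.

{code:java}
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsFileUriExample {
  public static void main(String[] args) {
    String winPath = "C:\\work\\test-config-uri-TestConfiguration.xml"; // hypothetical path
    try {
      // Broken: the drive letter lands in the authority and '\' is not a legal URI character.
      new URI("file://" + winPath);
    } catch (URISyntaxException e) {
      System.out.println("naive concatenation fails: " + e.getMessage());
    }
    // Safe: File#toURI escapes the path and emits file:/C:/... with no authority component.
    URI safe = new File(winPath).toURI();
    System.out.println("File#toURI gives: " + safe);
  }
}
{code}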






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15308:
-
Attachment: HADOOP-15308.002.patch

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch, 
> HADOOP-15308.002.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398011#comment-16398011
 ] 

Íñigo Goiri commented on HADOOP-15308:
--

[^HADOOP-15308.001.patch] looks clean.
+1
Committing all the way to 2.9.

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Commented] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398006#comment-16398006
 ] 

Akira Ajisaka commented on HADOOP-15294:


LGTM, +1

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}






[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-13 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15305:
-
Attachment: HADOOP-15305.002.patch

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).
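A hedged before/after sketch of the replacement described above (the file name is illustrative only).

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

public class WriteStringToFileExample {
  public static void main(String[] args) throws IOException {
    File out = new File("example.txt"); // hypothetical file
    // Deprecated two-argument form, relies on the JVM default charset:
    // FileUtils.writeStringToFile(out, "hello");
    // Replacement, states the charset explicitly:
    FileUtils.writeStringToFile(out, "hello", StandardCharsets.UTF_8);
  }
}
{code}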






[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-13 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15305:
-
Status: Patch Available  (was: In Progress)

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).






[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-13 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15305:
-
Status: In Progress  (was: Patch Available)

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).






[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397997#comment-16397997
 ] 

genericqa commented on HADOOP-15308:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15308 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914380/HADOOP-15308.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9679b042a4a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b167d60 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14305/testReport/ |
| Max. process+thread count | 1767 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14305/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> 

[jira] [Commented] (HADOOP-15153) [branch-2.8] Increase heap memory to avoid the OOM in pre-commit

2018-03-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397970#comment-16397970
 ] 

Brahma Reddy Battula commented on HADOOP-15153:
---

Thanks [~chris.douglas] and others for the useful discussions.

> [branch-2.8] Increase heap memory to avoid the OOM in pre-commit
> 
>
> Key: HADOOP-15153
> URL: https://issues.apache.org/jira/browse/HADOOP-15153
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15153-branch-2.8.patch
>
>
> Reference:
> https://builds.apache.org/job/PreCommit-HDFS-Build/22528/consoleFull
> https://builds.apache.org/job/PreCommit-HDFS-Build/22528/artifact/out/branch-mvninstall-root.txt
> {noformat}
> [ERROR] unable to create new native thread -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
> {noformat}






[jira] [Commented] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397946#comment-16397946
 ] 

Akira Ajisaka commented on HADOOP-15305:


Thanks [~zhenyi]. Would you remove the unused import from 
TestHttpFSServerWebServer.java? I'm +1 if that is addressed.

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15305.001.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).






[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397889#comment-16397889
 ] 

Íñigo Goiri commented on HADOOP-15308:
--

[^HADOOP-15308.001.patch] LGTM.
I'll commit once Yetus comes back.
Thank you [~surmountian]!

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15308:

Attachment: HADOOP-15308.001.patch

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15308:
-
Attachment: (was: HADOOP-15308.001.patch)

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15308:

Attachment: HADOOP-15308.001.patch

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397866#comment-16397866
 ] 

Xiao Chen edited comment on HADOOP-14445 at 3/13/18 11:48 PM:
--

Thanks for the review [~jojochuang], good comments! Also looking forward to 
[~daryn]'s review. Appreciate the review cycles.

bq. Why was KerberosConfiguration removed in the patch?
I was confused when adding tests and found that it's not used anywhere. Added 
it back, can have the removal done in a separate jira for cleanness.

bq. close the KeyProviders in TestKMS... in the initial test code ...
Good catch. I think this was missed in day 0 tests. Handled in this patch for 
review convenience, but created HADOOP-15313 for cleanness..

All other comments are addressed in, and good catch on the duplicate test 
method. Indeed client versions are hard to manage - the config is only a way to 
not duplicate tokens once we're sure everything is upgraded. I added more text 
into core-default.xml to explain, and will add similar lines to release notes 
once this is in. Didn't add to documentation because I fear this would confuse 
average users when they see that from documentation...


was (Author: xiaochen):
Thanks for the review [~jojochuang], good comments! Also looking forward to 
[~daryn]'s review. Appreciate the review cycles.

bq. Why was KerberosConfiguration removed in the patch?
I was confused when adding tests and found that it's not used anywhere. Added 
it back, can have the removal done in a separate jira for cleanness.

bq. close the KeyProviders in TestKMS... in the initial test code ...
Good catch. I think this was missed in day 0 tests. Handled in this patch for 
review convenience, but created HADOOP-13513 for cleanness..

All other comments are addressed in, and good catch on the duplicate test 
method. Indeed client versions are hard to manage - the config is only a way to 
not duplicate tokens once we're sure everything is upgraded. I added more text 
into core-default.xml to explain, and will add similar lines to release notes 
once this is in. Didn't add to documentation because I fear this would confuse 
average users when they see that from documentation...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
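To make the quoted lookup concrete, a small runnable illustration (ports and localhost are placeholders so the addresses resolve): each KMS endpoint yields a distinct token service key, which is why a delegation token obtained through one instance is not found when the client goes to another.

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class KmsTokenServiceExample {
  public static void main(String[] args) {
    // Two KMS endpoints, modelled here as two ports on localhost.
    Text service1 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9600));
    Text service2 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9601));
    // Different keys, so creds.getToken(service) only matches the instance
    // the token was originally issued for.
    System.out.println(service1 + " vs " + service2);
  }
}
{code}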






[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397866#comment-16397866
 ] 

Xiao Chen edited comment on HADOOP-14445 at 3/13/18 11:48 PM:
--

Thanks for the review [~jojochuang], good comments! Also looking forward to 
[~daryn]'s review. Appreciate the review cycles.

bq. Why was KerberosConfiguration removed in the patch?
I was confused when adding tests and found that it's not used anywhere. Added 
it back, can have the removal done in a separate jira for cleanness.

bq. close the KeyProviders in TestKMS... in the initial test code ...
Good catch. I think this was missed in day 0 tests. Handled in this patch for 
review convenience, but created HADOOP-13513 for cleanness..

All other comments are addressed in, and good catch on the duplicate test 
method. Indeed client versions are hard to manage - the config is only a way to 
not duplicate tokens once we're sure everything is upgraded. I added more text 
into core-default.xml to explain, and will add similar lines to release notes 
once this is in. Didn't add to documentation because I fear this would confuse 
average users when they see that from documentation...


was (Author: xiaochen):
Thanks for the review [~jojochuang], good comments! Also looking forward to 
[~daryn]'s review. Appreciate the review cycles.

bq. Why was KerberosConfiguration removed in the patch?
I was confused when adding tests and found that it's not used anywhere. Added 
it back, can have the removal done in a separate jira for cleanness.

bq. close the KeyProviders in TestKMS... in the initial test code ...
Good catch. I think this was missed in day 0 tests. Handled in this patch for 
review convenience, but will create a separate jira for it.

All other comments are addressed in, and good catch on the duplicate test 
method. Indeed client versions are hard to manage - the config is only a way to 
not duplicate tokens once we're sure everything is upgraded. I added more text 
into core-default.xml to explain, and will add similar lines to release notes 
once this is in. Didn't add to documentation because I fear this would confuse 
average users when they see that from documentation...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Created] (HADOOP-15313) TestKMS should close providers

2018-03-13 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-15313:
--

 Summary: TestKMS should close providers
 Key: HADOOP-15313
 URL: https://issues.apache.org/jira/browse/HADOOP-15313
 Project: Hadoop Common
  Issue Type: Test
  Components: kms, test
Reporter: Xiao Chen
Assignee: Xiao Chen


During the review of HADOOP-14445, [~jojochuang] found that the key providers 
are not closed in tests. Details in [this 
comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16397824=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16397824].

We should investigate and handle that in all related tests.
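A hedged sketch of the cleanup this is after, assuming the KeyProvider#close() hook is the right release point (the kms:// URI is illustrative only); try-with-resources works the same way where the concrete provider implements Closeable.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class CloseProviderExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative KMS URI; any URI KeyProviderFactory can resolve works the same way.
    KeyProvider provider =
        KeyProviderFactory.get(new URI("kms://http@localhost:9600/kms"), conf);
    try {
      System.out.println(provider.getKeys());
    } finally {
      provider.close(); // assumed no-op in the base class, real cleanup in the KMS providers
    }
  }
}
{code}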






[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397866#comment-16397866
 ] 

Xiao Chen commented on HADOOP-14445:


Thanks for the review [~jojochuang], good comments! Also looking forward to 
[~daryn]'s review. Appreciate the review cycles.

bq. Why was KerberosConfiguration removed in the patch?
I was confused when adding tests and found that it's not used anywhere. Added 
it back, can have the removal done in a separate jira for cleanness.

bq. close the KeyProviders in TestKMS... in the initial test code ...
Good catch. I think this was missed in day 0 tests. Handled in this patch for 
review convenience, but will create a separate jira for it.

All other comments are addressed in, and good catch on the duplicate test 
method. Indeed client versions are hard to manage - the config is only a way to 
not duplicate tokens once we're sure everything is upgraded. I added more text 
into core-default.xml to explain, and will add similar lines to release notes 
once this is in. Didn't add to documentation because I fear this would confuse 
average users when they see that from documentation...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397848#comment-16397848
 ] 

Íñigo Goiri commented on HADOOP-15308:
--

The unit tests run correctly, as one can see 
[here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14304/testReport/org.apache.hadoop.conf/TestConfiguration/].
[~surmountian], you tested this on Windows; are the results clean now?
Can you fix the checkstyle warnings?
The GenericTestUtils part should really be fixed in the source, but I think it is fine 
here.
Other than that, +1.



> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397828#comment-16397828
 ] 

genericqa commented on HADOOP-15308:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 106 unchanged - 0 fixed = 109 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15308 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914326/HADOOP-15308.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 961163596a91 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d6994d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14304/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14304/testReport/ |
| Max. process+thread count | 1349 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14304/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397824#comment-16397824
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

And I just found two undocumented configuration keys: 
hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher. 
Filed HADOOP-15312 for that.

Also, you probably want to close the KeyProviders in the TestKMS test cases. Using 
try-with-resources clauses should do it. (My bad; I didn't close those KPs in the 
initial test code which you inherited.)

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Created] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-03-13 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15312:


 Summary: Undocumented KeyProvider configuration keys
 Key: HADOOP-15312
 URL: https://issues.apache.org/jira/browse/HADOOP-15312
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


Via HADOOP-14445, I found two undocumented configuration keys: 
hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher






[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397767#comment-16397767
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

# Why was KerberosConfiguration removed in the patch?
 # Looks like
{code:java}
private void testDelegationTokenAccess(File testDir, String keyName,
 boolean submitterConfValue, boolean taskConfValue) throws Exception
{code}
 in TestKMS is not used. (There are two testDelegationTokenAccess(), one is 
public, the other is private)

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397732#comment-16397732
 ] 

Hudson commented on HADOOP-15311:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13831 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13831/])
HADOOP-15311. HttpServer2 needs a way to configure the acceptor/selector 
(cdouglas: rev 9d6994da1964c1125a33b3a65e7a7747e2d0bc59)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.0.2
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.
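Not the attached patch, just a hedged sketch of the Jetty API such configurability would feed into: ServerConnector accepts explicit acceptor/selector counts, and the thread pool must cover at least {{acceptors + selectors + 1}} threads. The counts below are placeholders for whatever the new configuration keys would supply.

{code:java}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ConfigurableConnectorExample {
  public static void main(String[] args) throws Exception {
    int acceptors = 1; // hypothetical values a configuration key could supply
    int selectors = 2;
    // The pool must have room for acceptors + selectors + 1 threads at minimum.
    QueuedThreadPool pool = new QueuedThreadPool(Math.max(8, acceptors + selectors + 1));
    Server server = new Server(pool);
    ServerConnector connector = new ServerConnector(server, acceptors, selectors);
    connector.setPort(0); // ephemeral port for the example
    server.addConnector(connector);
    server.start();
    System.out.println("listening on " + connector.getLocalPort()
        + " with " + connector.getAcceptors() + " acceptor(s)");
    server.stop();
  }
}
{code}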






[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397706#comment-16397706
 ] 

Erik Krogen commented on HADOOP-15311:
--

Thanks Chris!

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.0.2
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.






[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15308:
-
Status: Patch Available  (was: Open)

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.






[jira] [Commented] (HADOOP-14813) Windows build fails "command line too long"

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397684#comment-16397684
 ] 

Íñigo Goiri commented on HADOOP-14813:
--

Thanks [~ste...@apache.org], that works for me for now.
Is there a longer term solution?

> Windows build fails "command line too long"
> ---
>
> Key: HADOOP-14813
> URL: https://issues.apache.org/jira/browse/HADOOP-14813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
> Environment: Windows. username "Administrator"
>Reporter: Steve Loughran
>Priority: Minor
>
> Trying to build trunk as user "administrator" is failing in 
> native-maven-plugin/hadoop-common: command line too long. By the look of 
> things, it's the number of artifacts from the maven repository that is 
> filling up the line; the CP really needs to go in a file instead, assuming 
> the maven plugin will let us.






[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15311:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.2
   Status: Resolved  (was: Patch Available)

Fixed the checkstyle warning.

+1. I committed this. Thanks, Erik.

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.0.2
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15311:
---
Attachment: HADOOP-15311.002.patch

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397670#comment-16397670
 ] 

Steve Loughran commented on HADOOP-15209:
-

Thanks for testing; you are set up nicely for those scale tests.

I do know where that typo comes from (look under resources/); I just chose to 
leave it alone in case someone's scripts were looking for it.

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.
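
A minimal sketch of the idea described above, assuming the copy listing is processed in sorted order; the class and method names are invented for the sketch and this is not the structure used by the attached patches. It remembers directories already deleted and skips any delete underneath the most recent one, keeping at most one chain of ancestors in memory.
{code:title=Sketch: skipping deletes under an already-deleted directory (not the attached patch)}
import java.util.ArrayDeque;
import java.util.Deque;

public class DeleteSkipSketch {
  // Stack of directories already deleted recursively, innermost on top.
  private final Deque<String> deletedDirs = new ArrayDeque<>();

  /** @return true if the delete still has to be issued to the store. */
  public boolean shouldDelete(String path, boolean isDirectory) {
    // Pop directories whose subtree we have left; keeps memory use bounded.
    while (!deletedDirs.isEmpty() && !path.startsWith(deletedDirs.peek() + "/")) {
      deletedDirs.pop();
    }
    if (!deletedDirs.isEmpty()) {
      return false;                  // an ancestor was already deleted recursively
    }
    if (isDirectory) {
      deletedDirs.push(path);        // children arrive next in sorted order
    }
    return true;
  }

  public static void main(String[] args) {
    DeleteSkipSketch t = new DeleteSkipSketch();
    System.out.println(t.shouldDelete("/a", true));        // true  - delete /a recursively
    System.out.println(t.shouldDelete("/a/file1", false)); // false - already gone with /a
    System.out.println(t.shouldDelete("/b/file2", false)); // true  - outside the deleted subtree
  }
}
{code}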



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14813) Windows build fails "command line too long"

2018-03-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397664#comment-16397664
 ] 

Steve Loughran commented on HADOOP-14813:
-

I made this go away by (somehow) telling Maven that the local repository was C:\\m2; 
I think it's discussed here: 
https://maven.apache.org/guides/mini/guide-configuring-maven.html

That's inevitably just postponing things, though.
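
For anyone hitting the same thing, that workaround is normally expressed through the documented {{localRepository}} entry in Maven's settings.xml; a minimal sketch, with the short {{C:\m2}} path taken from the comment above.
{code:title=Sketch: shortening the local repository path in settings.xml}
<!-- A shorter repository root keeps every artifact path on the javah command line short. -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <localRepository>C:\m2</localRepository>
</settings>
{code}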

> Windows build fails "command line too long"
> ---
>
> Key: HADOOP-14813
> URL: https://issues.apache.org/jira/browse/HADOOP-14813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
> Environment: Windows. username "Administrator"
>Reporter: Steve Loughran
>Priority: Minor
>
> Trying to build trunk as user "administrator" is failing in - 
> native-maven-plugin/hadoop common; command line too long. By the look of 
> things, its the number of artifacts from the maven repository which is 
> filling up the line; the CP really needs to go in a file instead, assuming 
> the maven plugin will let us.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397628#comment-16397628
 ] 

genericqa commented on HADOOP-15311:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 93 unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15311 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914334/HADOOP-15311.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6bc7e865f4c7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 45cccad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14303/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14303/testReport/ |
| Max. process+thread count | 1515 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14303/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-15310) s3a: option to disable s3a://landsat-pds/ tests

2018-03-13 Thread Vasu Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397620#comment-16397620
 ] 

Vasu Kulkarni commented on HADOOP-15310:


Thanks [~ste...@apache.org]

> s3a: option to disable s3a://landsat-pds/ tests
> ---
>
> Key: HADOOP-15310
> URL: https://issues.apache.org/jira/browse/HADOOP-15310
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vasu Kulkarni
>Priority: Major
>
> When testing against S3-like servers in our own lab, it is worth having an 
> option to disable the landsat-pds test. The default behavior is that it 
> fails if the bucket is not available; a better option would be to let us 
> disable this test when testing with local s3a servers that don't have the 
> public bucket.
> <property>
>   <name>fs.s3a.scale.test.csvfile</name>
>   <value>s3a://landsat-pds/scene_list.gz</value>
> </property>
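
One hedged workaround in the meantime, assuming the lab store is reachable through s3a, is to point the same property at an object that does exist there; the bucket and key below are placeholders, not a documented switch for disabling the test.
{code:title=Sketch: overriding the scale-test CSV location (placeholder bucket)}
<property>
  <name>fs.s3a.scale.test.csvfile</name>
  <!-- replace the public landsat-pds default with an object in the local store -->
  <value>s3a://local-test-bucket/scene_list.gz</value>
</property>
{code}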



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397590#comment-16397590
 ] 

genericqa commented on HADOOP-15311:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
37s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 35s{color} 
| {color:red} root generated 188 new + 1100 unchanged - 0 fixed = 1288 total 
(was 1100) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15311 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914330/HADOOP-15311.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3efe39e50d91 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 45cccad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14302/artifact/out/branch-compile-root.txt
 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14302/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14302/testReport/ |
| Max. process+thread count | 1352 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14302/console 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397559#comment-16397559
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

Halfway through the patch ...
 # The handling of old/new KMS DT needs a little refactoring 
(KMSClientProvider#getKMSToken(), KMSClientProvider#addDelegationTokens()). One 
year from now I won't be able to remember why we did these tricks, and the 
refactored code will be easier to maintain in the future as well.
 # Having a configuration hadoop.security.kms.client.copy.legacy.token to flip 
the switch is fine. It'll need better documentation, in the release note for 
example. I looked at the description in core-default.xml and honestly I 
wouldn't understand the caveats. It'll be hard to debug RM scalability problems 
while this key is on, and I doubt people will understand that once this is 
turned off, old clients will not be supported any more.
 # There will be cases where clients are on different versions. There will be 
cases where a client accesses multiple clusters (distcp). There will be cases 
where an application relies on multiple versions of Hadoop libs. It's going to 
be difficult to control client versions.
 # Would it make sense to mark KMSDelegationToken deprecated?

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
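
To make the quoted behaviour concrete, a small self-contained sketch using the same SecurityUtil.buildTokenService call: the service key is derived from the address, so a token stored for one KMS instance is simply not found when the request is routed to another. The two loopback addresses and the port stand in for two KMS hosts behind a load balancer, and the token is an empty placeholder rather than a real KMS delegation token.
{code:title=Sketch: why per-address service keys defeat token sharing}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class KmsTokenServiceSketch {
  public static void main(String[] args) {
    Credentials creds = new Credentials();

    // Token obtained via the first KMS instance is keyed by that instance's address.
    Text service1 = SecurityUtil.buildTokenService(
        new InetSocketAddress("127.0.0.1", 9600));
    Token<TokenIdentifier> placeholder = new Token<>();  // stand-in for a KMS delegation token
    creds.addToken(service1, placeholder);

    // A request routed to the second instance builds a different key and finds nothing,
    // even though both instances could verify the token via the shared secret.
    Text service2 = SecurityUtil.buildTokenService(
        new InetSocketAddress("127.0.0.2", 9600));
    System.out.println("token for " + service2 + ": " + creds.getToken(service2)); // prints null
  }
}
{code}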



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397474#comment-16397474
 ] 

Erik Krogen commented on HADOOP-15311:
--

Mistake on my part, good catch. Thanks Chris! Attached v001 patch.

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-15311:
-
Attachment: HADOOP-15311.001.patch

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397465#comment-16397465
 ] 

Chris Douglas commented on HADOOP-15311:


Since this is overwriting the {{server}} field that's set using 
{{\@BeforeClass}}, does that interfere with other tests?

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-15311:
-
Status: Patch Available  (was: In Progress)

Attached v000 patch with a simple change & unit test. [~chris.douglas], want to 
take a look as a prereq for HDFS-13265?

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-15311:
-
Attachment: HADOOP-15311.000.patch

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15311.000.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397451#comment-16397451
 ] 

Xiaoyu Yao commented on HADOOP-15234:
-

[~xiaochen], I'm OK without a unit test if it would complicate the production code. 
+1 from me too. 

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch, HADOOP-15234.004.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.
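
A minimal sketch of the kind of guard being discussed here, not the attached patch: fail with a descriptive message before the null provider ever reaches {{CachingKeyProvider}}/{{KeyProviderExtension}}. The helper name is invented for the sketch.
{code:title=Sketch: failing fast instead of the NPE in KMSWebApp}
import org.apache.hadoop.crypto.key.KeyProvider;

public class KeyProviderGuardSketch {
  /**
   * Surface a descriptive error when no KeyProvider could be created,
   * instead of letting the null reference NPE deeper in the stack.
   */
  static KeyProvider requireProvider(KeyProvider provider, String providerUri) {
    if (provider == null) {
      throw new IllegalStateException(
          "No KeyProvider could be created for '" + providerUri
              + "'; check the KMS key provider configuration");
    }
    return provider;
  }
}
{code}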



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397410#comment-16397410
 ] 

Íñigo Goiri commented on HADOOP-15308:
--

Thanks [~surmountian] for [^HADOOP-15308.000.patch].
Let's see if the changes are OK for Linux too when Yetus comes back.
The changes in lines 1917 and 1944 seem related to what I mentioned in 
HDFS-13268.
We should eventually fix those in the root: {{GenericTestUtils}}.

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-13 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15308:

Attachment: HADOOP-15308.000.patch

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15308.000.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We seem to not be managing the colon of the drive path properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397386#comment-16397386
 ] 

genericqa commented on HADOOP-15294:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15294 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914297/HADOOP-15294.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88b34a8e1342 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0355ec2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14301/testReport/ |
| Max. process+thread count | 1363 (vs. ulimit of 1) |
| modules | C: 

[jira] [Work started] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15311 started by Erik Krogen.

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14813) Windows build fails "command line too long"

2018-03-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397322#comment-16397322
 ] 

Íñigo Goiri commented on HADOOP-14813:
--

We still see this issue in Windows for trunk:
{code:java}
[INFO] --- native-maven-plugin:1.0-alpha-8:javah (default) @ hadoop-common ---
[INFO] cmd.exe /X /C "C:\PROGRA~1\Java\jdk1.8.0_162\bin\javah -d 
D:\hadoop-trunk\hadoop-common-project\hadoop-common\target\native\javah 
-classpath 

[jira] [Commented] (HADOOP-12767) update apache httpclient version to 4.5.2; httpcore to 4.4.4

2018-03-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397282#comment-16397282
 ] 

Kihwal Lee commented on HADOOP-12767:
-

[~shv], do you want to pull this into 2.7 before the next release? The 2015 
CVE isn't too bad, but there is an older one about a MITM attack, which is more 
serious.

> update apache httpclient version to 4.5.2; httpcore to 4.4.4
> 
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Artem Aliev
>Assignee: Artem Aliev
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12767-branch-2-005.patch, 
> HADOOP-12767-branch-2.004.patch, HADOOP-12767-branch-2.005.patch, 
> HADOOP-12767.001.patch, HADOOP-12767.002.patch, HADOOP-12767.003.patch, 
> HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397271#comment-16397271
 ] 

Xiao Chen commented on HADOOP-15234:


From the discussion above, [~shahrs87] and I were okay with no test, because 
adding a test would unnecessarily complicate the existing code. [~xyao], please 
comment if you feel strongly.

+1 from me pending Rushabh's comment above.

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch, HADOOP-15234.004.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-03-13 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-15311:


 Summary: HttpServer2 needs a way to configure the 
acceptor/selector count
 Key: HADOOP-15311
 URL: https://issues.apache.org/jira/browse/HADOOP-15311
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Reporter: Erik Krogen
Assignee: Erik Krogen


HttpServer2 starts up with some number of acceptors and selectors, but only 
allows for the automatic configuration of these based off of the number of 
available cores:
{code:title=org.eclipse.jetty.server.ServerConnector}
selectors > 0 ? selectors : Math.max(1, Math.min(4, 
Runtime.getRuntime().availableProcessors() / 2)))
{code}
{code:title=org.eclipse.jetty.server.AbstractConnector}
if (acceptors < 0) {
  acceptors = Math.max(1, Math.min(4, cores / 8));
}
{code}
A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, so 
in addition to allowing for a higher tuning value under heavily loaded 
environments, adding configurability for this enables tuning these values down 
in resource constrained environments such as a MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15294:
--
Status: Patch Available  (was: Open)

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397198#comment-16397198
 ] 

Takanobu Asanuma commented on HADOOP-15294:
---

Uploaded the 1st patch.

In HADOOP-15291, we added a validation that the Set of Principals is not 
empty. But that is not sufficient, because there are cases where the Principals 
still remain after logout. The failed test, 
{{TestUGILoginFromKeytab#testReloginAfterFailedRelogin()}}, is one of them. 
Therefore, we need to check the Kerberos Private Credentials rather than the 
Principals; it is more rigorous.
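
A minimal sketch of that check, not the attached patch: gate logout on whether the Subject still holds Kerberos tickets rather than on the principal set. The class and method names are invented for the sketch.
{code:title=Sketch: guarding logout on Kerberos private credentials (not the attached patch)}
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;

public class LogoutGuardSketch {
  /**
   * Only a Subject that still holds Kerberos tickets has anything to log out;
   * checking the principal set alone misses the case where principals survive
   * a previous logout.
   */
  static boolean hasKerberosCredentials(Subject subject) {
    return subject != null
        && !subject.getPrivateCredentials(KerberosTicket.class).isEmpty();
  }
}
{code}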

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-13 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15294:
--
Attachment: HADOOP-15294.1.patch

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397178#comment-16397178
 ] 

Daryn Sharp commented on HADOOP-14445:
--

I'll take a look.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-13 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397153#comment-16397153
 ] 

Ewan Higgs commented on HADOOP-15209:
-

I tried this out on a directory with 6995 files (a hadoop distribution binary 
release), writing to an S3A compatible storage, and it appears to work. The 
delete part was fairly quick and only logged deletes at the directory level, 
letting the FileSystem perform the delete for everything with the directory's 
prefix.

Then I cleaned out the source directory and ran distcp again - and it correctly 
elided all of the deletions but the top level:
{quote}{{Deleted from target: files: 0 directories: 1; skipped deletions 6995; 
deletions already missing 0; failed deletes 0}}{quote}
As an aside, I seem to be unable to find where the DistCp counters are 
formatted such that BANDWITH_IN_BYTES becomes "Bandwidth in Btyes":
{quote}
DistCp Counters
        Bandwidth in Btyes=189349
        Bytes Copied=312048557
        Bytes Expected=312048557
        Files Copied=6155
        DIR_COPY=841
{quote}
 

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397069#comment-16397069
 ] 

Wei-Chiu Chuang edited comment on HADOOP-14445 at 3/13/18 2:58 PM:
---

Thanks a lot to [~xiaochen], [~shahrs87], [~daryn], [~yzhangal] and [~asuresh]
for the valuable comments and patch work. This is a tricky case and it impacts
a number of our customers (as well as partner software). I hadn't been paying
attention to the development of this Jira, but I just synced up with
[~xiaochen] on the latest patch. The approach totally makes sense to me.

I am reviewing the latest patch and will post review comments later today(-ish).


was (Author: jojochuang):
Thanks a lot to [~xiaochen], [~shahrs87], [~daryn], [~yzhangal] and [~asuresh]
for the valuable comments and patch work. This is a tricky case and it impacts
a number of our customers (as well as partner software). I hadn't been paying
attention to the development of this Jira, but I just synced up with
[~xiaochen] on the latest patch. Totally makes sense to me.

I am reviewing the latest patch and will post review comments later today(-ish).

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397069#comment-16397069
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

Thanks a lot to [~xiaochen], [~shahrs87], [~daryn], [~yzhangal] and [~asuresh]
for the valuable comments and patch work. This is a tricky case and it impacts
a number of our customers (as well as partner software). I hadn't been paying
attention to the development of this Jira, but I just synced up with
[~xiaochen] on the latest patch. Totally makes sense to me.

I am reviewing the latest patch and will post review comments later today(-ish).

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.






[jira] [Commented] (HADOOP-15310) s3a: option to disable s3a://landsat-pds/ tests

2018-03-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396798#comment-16396798
 ] 

Steve Loughran commented on HADOOP-15310:
-

You can do this, as covered in the docs:

https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/testing.html#Configuring_the_CSV_file_read_tests.2A.2A
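
As a sketch of what that page describes (worth double-checking against the
testing doc for your Hadoop version): set the CSV test file property in the
auth-keys.xml/core-site.xml used by the hadoop-aws tests, either to an empty
value to skip the tests that need the public landsat bucket, or to a CSV file
in a bucket you control.
{code:xml}
<!-- Sketch; assumes an empty value skips the landsat-based tests. -->
<property>
  <name>fs.s3a.scale.test.csvfile</name>
  <value></value>
</property>
{code}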

> s3a: option to disable s3a://landsat-pds/ tests
> ---
>
> Key: HADOOP-15310
> URL: https://issues.apache.org/jira/browse/HADOOP-15310
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vasu Kulkarni
>Priority: Major
>
> When testing with S3-like servers in our own lab, it is worth having an 
> option to disable the landsat-pds test. The default behavior is that it 
> fails if the bucket is not available; a better option would be to allow 
> disabling this test when testing against local S3A servers that don't have 
> access to the public bucket.
> <property>
>   <name>fs.s3a.scale.test.csvfile</name>
>   <value>s3a://landsat-pds/scene_list.gz</value>
> </property>






[jira] [Resolved] (HADOOP-15310) s3a: option to disable s3a://landsat-pds/ tests

2018-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15310.
-
Resolution: Not A Problem

> s3a: option to disable s3a://landsat-pds/ tests
> ---
>
> Key: HADOOP-15310
> URL: https://issues.apache.org/jira/browse/HADOOP-15310
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vasu Kulkarni
>Priority: Major
>
> When testing with S3-like servers in our own lab, it is worth having an 
> option to disable the landsat-pds test. The default behavior is that it 
> fails if the bucket is not available; a better option would be to allow 
> disabling this test when testing against local S3A servers that don't have 
> access to the public bucket.
> <property>
>   <name>fs.s3a.scale.test.csvfile</name>
>   <value>s3a://landsat-pds/scene_list.gz</value>
> </property>






[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2018-03-13 Thread caixiaofeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396727#comment-16396727
 ] 

caixiaofeng commented on HADOOP-9969:
-

Also, the code in 2.7.2 is already the same as it would be with this patch
applied.

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
>Priority: Major
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and SASL client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the one configured on the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType); at that point authMethod still holds its initial value, SIMPLE, 
> and never gets updated with the method requested by the server, so Kerberos 
> relogin never happens.
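As an aside, a common client-side mitigation while this is open (not the fix
this issue asks for, and only applicable to keytab-based logins) is to renew
the TGT proactively before issuing RPCs rather than relying on the
relogin-on-failure path:
{code:java}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class ProactiveReloginSketch {
  /**
   * Call before long-lived clients issue RPCs. For keytab-based logins,
   * checkTGTAndReloginFromKeytab() re-logins if the TGT is missing or
   * close to expiry, so the SASL layer should not see an expired ticket.
   */
  public static void ensureFreshTgt() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    if (ugi.isFromKeytab()) {
      ugi.checkTGTAndReloginFromKeytab();
    }
  }
}
{code}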






[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2018-03-13 Thread caixiaofeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396722#comment-16396722
 ] 

caixiaofeng commented on HADOOP-9969:
-

Any update? We hit this with IBM JDK 1.7.0 SR4 and Hadoop 2.7.2.

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
>Priority: Major
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and SASL client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the one configured on the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType); at that point authMethod still holds its initial value, SIMPLE, 
> and never gets updated with the method requested by the server, so Kerberos 
> relogin never happens.






[jira] [Commented] (HADOOP-14999) AliyunOSS: provide one asynchronous multi-part based uploading mechanism

2018-03-13 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396605#comment-16396605
 ] 

SammiChen commented on HADOOP-14999:


Hi [~uncleGen], some comments:

1.
{quote}
Preconditions.checkArgument(v >= min,
    String.format("Value of %s: %d is below the minimum value %d",
        key, v, min));
{quote}
Please ignore this comment; String.format with %d is fine here.

2.
bq. Asynchronous multi-part based uploading mechanism to support huge files which are larger than 5GB.

Please explain in detail where the 5GB threshold comes from.

3.
{quote}
if (partSize < MULTIPART_MIN_SIZE) {
  LOG.warn("{} must be at least 5 MB; configured value is {}",
      property, partSize);
  partSize = MULTIPART_MIN_SIZE;
}
{quote}
MULTIPART_MIN_SIZE is 100K, but the threshold in the warning message is 5 MB;
the two should be consistent.

4.
bq. long partSize = AliyunOSSUtils.getMultipartSizeProperty(getConf(), MULTIPART_UPLOAD_PART_SIZE_DEFAULT);

Can we use uploadPartSize instead here?

5.
bq. I also add the resource clean logic in try-finally
{quote}
try {
  blockStream.write(b, off, len);
  blockWritten += len;
  if (blockWritten >= blockSize) {
    uploadCurrentPart();
    blockWritten = 0L;
  }
} finally {
  for (File tFile : blockFiles) {
    if (tFile.exists() && !tFile.delete()) {
      LOG.warn("Failed to delete temporary file {}", tFile);
    }
  }
}
{quote}
I see that the temp files are deleted in the finally block whether or not the
operation above succeeds. When store.uploadPart() returns, is the upload
already finished? If it is an asynchronous operation, deleting the temp file
on the normal path may cause trouble; see the sketch after these comments.

6. The performance data looks good.
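
On point 5, a minimal sketch of tying the temp-file deletion to completion of
an asynchronous part upload (blockFile, uploadPart and uploadPartAsync are
placeholders here, not the patch's actual API):
{code:java}
import java.io.File;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

// Sketch only: delete the block file when the async part upload finishes
// (successfully or not), instead of unconditionally in a finally block.
class PartUploadCleanupSketch {
  CompletableFuture<Void> uploadPartAsync(ExecutorService pool, File blockFile) {
    return CompletableFuture
        .runAsync(() -> uploadPart(blockFile), pool)   // the actual part PUT
        .whenComplete((ok, err) -> {
          // Runs after the upload attempt (including any retries inside
          // uploadPart) has finished, so the file is no longer needed.
          if (blockFile.exists() && !blockFile.delete()) {
            System.err.println("Failed to delete temporary file " + blockFile);
          }
        });
  }

  private void uploadPart(File blockFile) {
    // placeholder for store.uploadPart(...) in the real patch
  }
}
{code}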
  




> AliyunOSS: provide one asynchronous multi-part based uploading mechanism
> 
>
> Key: HADOOP-14999
> URL: https://issues.apache.org/jira/browse/HADOOP-14999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Major
> Attachments: HADOOP-14999.001.patch, HADOOP-14999.002.patch, 
> HADOOP-14999.003.patch, HADOOP-14999.004.patch, HADOOP-14999.005.patch, 
> HADOOP-14999.006.patch, HADOOP-14999.007.patch, HADOOP-14999.008.patch, 
> HADOOP-14999.009.patch, asynchronous_file_uploading.pdf, 
> diff-between-patch7-and-patch8.txt
>
>
> This mechanism is designed for uploading files in parallel and asynchronously:
>  - Improve the performance of uploading files to the OSS server. First, this 
> mechanism splits the output into multiple small blocks and uploads them in 
> parallel. Producing the output and uploading the blocks then proceed 
> asynchronously.
>  - Avoid buffering an overly large result on local disk. To cite an extreme 
> example, if a task outputs 100GB or more, we would otherwise need to write 
> all 100GB to local disk before uploading it, which is inefficient and limited 
> by disk space.
> This patch reuses {{SemaphoredDelegatingExecutor}} as the executor service 
> and depends on HADOOP-15039.
> The attached {{asynchronous_file_uploading.pdf}} illustrates the difference 
> between the previous {{AliyunOSSOutputStream}} and the new 
> {{AliyunOSSBlockOutputStream}}, i.e. this asynchronous multi-part based 
> uploading mechanism.
> 1. {{AliyunOSSOutputStream}}: we need to write the whole result to local 
> disk before we can upload it to OSS. This poses two problems:
>  - if the output file is too large, it can exhaust the local disk;
>  - if the output file is too large, the task waits a long time to upload the 
> result to OSS before finishing, wasting compute resources.
> 2. {{AliyunOSSBlockOutputStream}}: we cut the task output into small blocks, 
> i.e. small local files, and each block is packaged into an upload task. These 
> tasks are submitted to {{SemaphoredDelegatingExecutor}}, which uploads the 
> blocks in parallel, improving performance greatly (see the sketch after this 
> description).
> 3. Each task retries up to 3 times to upload its block to Aliyun OSS. If any 
> task fails, the whole file upload fails and the current upload is aborted.
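
A minimal sketch of the block-splitting pattern described above, with a plain
thread pool plus a Semaphore standing in for {{SemaphoredDelegatingExecutor}};
in-memory byte[] blocks stand in for the patch's temporary block files, and
class and method names are placeholders, not the actual
{{AliyunOSSBlockOutputStream}} API:
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// Sketch: split the stream into fixed-size blocks, upload each block on a
// bounded thread pool while the writer keeps producing the next block, then
// wait for all parts before "completing" the multipart upload.
class BlockUploadSketch {
  private static final int BLOCK_SIZE = 8 * 1024 * 1024;   // placeholder part size
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final Semaphore permits = new Semaphore(8);       // bound in-flight parts
  private final List<Future<String>> parts = new ArrayList<>();
  private final byte[] block = new byte[BLOCK_SIZE];
  private int blockWritten = 0;

  void write(byte[] b, int off, int len) throws Exception {
    while (len > 0) {
      int n = Math.min(len, BLOCK_SIZE - blockWritten);
      System.arraycopy(b, off, block, blockWritten, n);
      blockWritten += n;
      off += n;
      len -= n;
      if (blockWritten == BLOCK_SIZE) {
        submitCurrentBlock();
      }
    }
  }

  void close() throws Exception {
    if (blockWritten > 0) {
      submitCurrentBlock();                // flush the last, possibly short, block
    }
    for (Future<String> etag : parts) {    // wait until every part is uploaded
      etag.get();
    }
    // a real stream would now call completeMultipartUpload(partETags)
    pool.shutdown();
  }

  private void submitCurrentBlock() throws InterruptedException {
    final byte[] data = Arrays.copyOf(block, blockWritten);
    blockWritten = 0;
    permits.acquire();                     // back-pressure, like the semaphored executor
    parts.add(pool.submit(() -> {
      try {
        return uploadPart(data);           // placeholder for the OSS part upload
      } finally {
        permits.release();
      }
    }));
  }

  private String uploadPart(byte[] data) {
    return "etag-" + data.length;          // placeholder ETag
  }
}
{code}
The semaphore gives the back-pressure that {{SemaphoredDelegatingExecutor}}
provides in the patch: it bounds how many blocks can be buffered and in flight
at once.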


