[jira] [Comment Edited] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584680#comment-16584680
 ] 

Szilard Nemeth edited comment on HADOOP-15674 at 8/18/18 6:52 AM:
--

Thanks [~xiaochen] for the commits!
Is there any step I need to take, given that you mentioned there was a 
conflict with branch-2.8?


was (Author: snemeth):
Thanks [~xiaochen] for the commits!
Is there any steps that I need to take? Given that you mentioned there was a 
conflict with branch-2.8

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 2.8.5, 2.7.8, 3.0.4, 3.2, 3.1.2
>
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584680#comment-16584680
 ] 

Szilard Nemeth commented on HADOOP-15674:
-

Thanks [~xiaochen] for the commits!
Are there any steps I need to take, given that you mentioned there was a 
conflict with branch-2.8?

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 2.8.5, 2.7.8, 3.0.4, 3.2, 3.1.2
>
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584618#comment-16584618
 ] 

Thomas Marquardt commented on HADOOP-15679:
---

+1 LGTM

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed:
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with a minimum time of 
> 1s?)
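The proposed behavior (a longer default, configurable, with a floor) can be sketched as below; the helper and constant names are illustrative, not necessarily what the final patch uses:

```java
public class ShutdownHookManagerSketch {
    // Defaults follow the proposal in the description: 30s default, 1s minimum.
    static final long DEFAULT_TIMEOUT_SECONDS = 30;
    static final long MINIMUM_TIMEOUT_SECONDS = 1;

    /** Resolve the shutdown timeout: use the default when unset, clamp to the minimum. */
    static long effectiveTimeoutSeconds(Long configured) {
        long t = (configured == null) ? DEFAULT_TIMEOUT_SECONDS : configured;
        return Math.max(t, MINIMUM_TIMEOUT_SECONDS);
    }

    public static void main(String[] args) {
        System.out.println(effectiveTimeoutSeconds(null)); // 30
        System.out.println(effectiveTimeoutSeconds(0L));   // 1
    }
}
```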






[jira] [Comment Edited] (HADOOP-15661) ABFS: Add support for ACL

2018-08-17 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584606#comment-16584606
 ] 

Da Zhou edited comment on HADOOP-15661 at 8/18/18 3:52 AM:
---

Submitting patch: HADOOP-15661-HADOOP-15407-002.patch
 Ran tests after applying patch HADOOP-15660-HADOOP-15407-002.patch:

Below are the test results (WASB tests were skipped):

Test using *Oauth*, namespace enabled account.
 [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure ---
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure ---
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 468
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure ---
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 186
 [INFO] BUILD SUCCESS

Test using *SharedKey*:
 [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure ---
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure ---
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 641
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure ---
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 172
 [INFO] BUILD SUCCESS

 


was (Author: danielzhou):
Submitting patch : HADOOP-15661-HADOOP-15407-002.patch
 Ran tests after applied patch HADOOP-15660-HADOOP-15407-002.patch:

Below are the tests results:

Test using *Oauth*, namespace enabled account.
 [INFO] — maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure —
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] — maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure —
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] — maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure —
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 468
 [INFO] — maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure —
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 186
 [INFO] BUILD SUCCESS

Test using *SharedKey*:
 [INFO] — maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure —
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] — maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure —
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] — maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure —
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 641
 [INFO] — maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure —
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 172
 [INFO] BUILD SUCCESS

 

> ABFS: Add support for ACL
> -
>
> Key: HADOOP-15661
> URL: https://issues.apache.org/jira/browse/HADOOP-15661
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15661-HADOOP-15407-001.patch, 
> HADOOP-15661-HADOOP-15407-002.patch
>
>
> - Add support for ACL






[jira] [Updated] (HADOOP-15661) ABFS: Add support for ACL

2018-08-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15661:
-
Attachment: HADOOP-15661-HADOOP-15407-002.patch

> ABFS: Add support for ACL
> -
>
> Key: HADOOP-15661
> URL: https://issues.apache.org/jira/browse/HADOOP-15661
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15661-HADOOP-15407-001.patch, 
> HADOOP-15661-HADOOP-15407-002.patch
>
>
> - Add support for ACL






[jira] [Commented] (HADOOP-15661) ABFS: Add support for ACL

2018-08-17 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584606#comment-16584606
 ] 

Da Zhou commented on HADOOP-15661:
--

Submitting patch: HADOOP-15661-HADOOP-15407-002.patch
 Ran tests after applying patch HADOOP-15660-HADOOP-15407-002.patch:

Below are the test results:

Test using *Oauth*, namespace enabled account.
 [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure ---
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure ---
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 468
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure ---
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 186
 [INFO] BUILD SUCCESS

Test using *SharedKey*:
 [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
 [WARNING] Tests run: 265, Failures: 0, Errors: 0, Skipped: 76
 [INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure ---
 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(default-integration-test) @ hadoop-azure ---
 [WARNING] Tests run: 861, Failures: 0, Errors: 0, Skipped: 641
 [INFO] --- maven-failsafe-plugin:2.21.0:integration-test 
(sequential-integration-tests) @ hadoop-azure ---
 [WARNING] Tests run: 186, Failures: 0, Errors: 0, Skipped: 172
 [INFO] BUILD SUCCESS

 

> ABFS: Add support for ACL
> -
>
> Key: HADOOP-15661
> URL: https://issues.apache.org/jira/browse/HADOOP-15661
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15661-HADOOP-15407-001.patch
>
>
> - Add support for ACL






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584597#comment-16584597
 ] 

genericqa commented on HADOOP-10219:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-10219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936118/HADOOP-10219.v4.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a260d3c64a89 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 79c97f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15059/testReport/ |
| Max. process+thread count | 1669 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15059/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> -

[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584593#comment-16584593
 ] 

genericqa commented on HADOOP-15679:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 112 unchanged - 1 fixed = 112 total (was 113) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15679 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936112/HADOOP-15679-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 548038674edd 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 79c97f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15058/testReport/ |
| Max. process+thread count | 1587 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15058/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Assigned] (HADOOP-15611) Log more details for FairCallQueue

2018-08-17 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HADOOP-15611:
--

Assignee: Ryan Wu

> Log more details for FairCallQueue
> --
>
> Key: HADOOP-15611
> URL: https://issues.apache.org/jira/browse/HADOOP-15611
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15611.001.patch, HADOOP-15611.002.patch, 
> HADOOP-15611.003.patch, HADOOP-15611.004.patch
>
>
> In our usage of the FairCallQueue, we find that some key logs are missing. 
> Only a few logs are printed, which makes this feature hard to understand and 
> debug.
> At least the following places could print more logs:
> * DecayRpcScheduler#decayCurrentCounts
> * WeightedRoundRobinMultiplexer#moveToNextQueue
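The extra logging asked for above might look like the following sketch; the message formats are illustrative, not the committed HADOOP-15611 patch:

```java
public class DecayLogSketch {
    /** Summary DecayRpcScheduler#decayCurrentCounts could log after each decay run. */
    static String decaySummary(long totalCallCount, int uniqueCallers) {
        return "Decayed RPC call counts: total=" + totalCallCount
            + ", callers=" + uniqueCallers;
    }

    /** Message WeightedRoundRobinMultiplexer#moveToNextQueue could log on a switch. */
    static String queueSwitchSummary(int fromIndex, int toIndex) {
        return "Moving from queue " + fromIndex + " to queue " + toIndex;
    }
}
```

In the real scheduler these would typically be emitted at DEBUG level so steady-state clusters are not flooded.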






[jira] [Updated] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-10219:

Attachment: HADOOP-10219.v4.patch

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch, HADOOP-10219.v4.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for an 
> extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is used in the NN switch from Standby to Active, and can 
> therefore have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584560#comment-16584560
 ] 

Lukas Majercak commented on HADOOP-10219:
-

Thanks for the feedback, changed to AtomicReference.

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch, HADOOP-10219.v4.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for an 
> extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is used in the NN switch from Standby to Active, and can 
> therefore have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-15684) triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException happens.

2018-08-17 Thread Rong Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584558#comment-16584558
 ] 

Rong Tang commented on HADOOP-15684:


[~jojochuang] No, it is not 2.7.X and 2.9.X, thanks for pointing it out. Our 
private branches manually merged these multiple-NameNode related changes.

I also checked trunk, and it still has this issue. Please help confirm it, 
thanks.

BTW, the warning log is misleading when the remote NN is standby:

// it is a standby exception, so we try the other NN
 LOG.warn("Failed to reach remote node: " + currentNN
 + ", retrying with remaining remote NNs");

> triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException 
> happens. 
> 
>
> Key: HADOOP-15684
> URL: https://issues.apache.org/jira/browse/HADOOP-15684
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Rong Tang
>Priority: Critical
> Attachments: FixRollEditLog.patch, 
> hadoop--rollingUpgrade-BN2SCH070021402.log
>
>
> When the name node calls triggerActiveLogRoll and the cachedActiveProxy is a 
> dead name node, it throws a ConnectTimeoutException. The expected behavior is 
> to try the next NN, but the current logic doesn't do so; instead, it keeps 
> trying the dead one, mistakenly taking it as active.
>  
> 2018-08-17 10:02:12,001 WARN [Edit log tailer] 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a 
> roll of the active NN
> org.apache.hadoop.net.ConnectTimeoutException: Call From 
> BN2SCH070021402/25.126.188.193 to BN2SCH070041016.ap.gbl:8020 failed on 
> socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 
> 2 millis timeout 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
>  
> C:\Users\rotang>ping BN2SCH070041016
> Pinging BN2SCH070041016 [25.126.141.79] with 32 bytes of data:
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
>  
> Attached are a log file showing how it repeatedly retries a dead name node, 
> and a fix patch.
>  
>  






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584557#comment-16584557
 ] 

Steve Loughran commented on HADOOP-10219:
-

Might be better to use an {{AtomicReference<Thread>}} and get/set that: one 
field to manage, less convoluted, and less likely to confuse findbugs.

Regarding the main IPC patch: someone who knows their way round that code is 
going to have to review it. I'm not that person, I'm afraid.
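A minimal sketch of what the {{AtomicReference<Thread>}} suggestion amounts to (method names and structure are illustrative, not the actual Client/ClientCache code):

```java
import java.util.concurrent.atomic.AtomicReference;

public class StopClientSketch {
    // One field to manage, as suggested: whichever thread is currently
    // blocked in connection setup, or null when none is.
    private final AtomicReference<Thread> connectingThread = new AtomicReference<>();

    /** connectBody stands in for the blocking connect/retry loop. */
    void setupIOstreams(Runnable connectBody) {
        connectingThread.set(Thread.currentThread());
        try {
            connectBody.run();
        } finally {
            connectingThread.set(null);   // always deregister, even on failure
        }
    }

    /** Called from stopClient(): interrupt a thread stuck mid-connect, if any. */
    void stop() {
        Thread t = connectingThread.get();
        if (t != null) {
            t.interrupt();
        }
    }
}
```

The single atomic field replaces separate "is connecting" flag plus thread bookkeeping, which is the "less convoluted" point above.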

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for an 
> extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is used in the NN switch from Standby to Active, and can 
> therefore have very bad consequences and cause downtime.






[jira] [Updated] (HADOOP-15684) triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException happens.

2018-08-17 Thread Rong Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Tang updated HADOOP-15684:
---
Affects Version/s: (was: 2.9.1)
   (was: 2.7.5)

> triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException 
> happens. 
> 
>
> Key: HADOOP-15684
> URL: https://issues.apache.org/jira/browse/HADOOP-15684
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Rong Tang
>Priority: Critical
> Attachments: FixRollEditLog.patch, 
> hadoop--rollingUpgrade-BN2SCH070021402.log
>
>
> When the name node calls triggerActiveLogRoll and the cachedActiveProxy is a 
> dead name node, it throws a ConnectTimeoutException. The expected behavior is 
> to try the next NN, but the current logic doesn't do so; instead, it keeps 
> trying the dead node, mistakenly taking it as active.
>  
> 2018-08-17 10:02:12,001 WARN [Edit log tailer] 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a 
> roll of the active NN
> org.apache.hadoop.net.ConnectTimeoutException: Call From 
> BN2SCH070021402/25.126.188.193 to BN2SCH070041016.ap.gbl:8020 failed on 
> socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 
> 2 millis timeout 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
>  
> C:\Users\rotang>ping BN2SCH070041016
> Pinging BN2SCH070041016 [25.126.141.79] with 32 bytes of data:
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
>  
> Attached are a log file showing how it repeatedly retries a dead name node, 
> and a fix patch.
>  
>  






[jira] [Commented] (HADOOP-15684) triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException happens.

2018-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584528#comment-16584528
 ] 

Wei-Chiu Chuang commented on HADOOP-15684:
--

Does it actually affect Hadoop 2.7.5 and 2.9.1? Multiple-NameNode support was 
added in HDFS-6440 and is not in Hadoop 2.x.

> triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException 
> happens. 
> 
>
> Key: HADOOP-15684
> URL: https://issues.apache.org/jira/browse/HADOOP-15684
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.7.5, 2.9.1
>Reporter: Rong Tang
>Priority: Critical
> Attachments: FixRollEditLog.patch, 
> hadoop--rollingUpgrade-BN2SCH070021402.log
>
>
> When the name node calls triggerActiveLogRoll and the cachedActiveProxy is a 
> dead name node, it throws a ConnectTimeoutException. The expected behavior is 
> to try the next NN, but the current logic doesn't do so; instead, it keeps 
> trying the dead node, mistakenly taking it as active.
>  
> 2018-08-17 10:02:12,001 WARN [Edit log tailer] 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a 
> roll of the active NN
> org.apache.hadoop.net.ConnectTimeoutException: Call From 
> BN2SCH070021402/25.126.188.193 to BN2SCH070041016.ap.gbl:8020 failed on 
> socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 
> 2 millis timeout 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
>  
> C:\Users\rotang>ping BN2SCH070041016
> Pinging BN2SCH070041016 [25.126.141.79] with 32 bytes of data:
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
>  
> Attached are a log file showing how it repeatedly retries a dead name node, 
> and a fix patch.
>  
>  






[jira] [Created] (HADOOP-15684) triggerActiveLogRoll stuck on dead name node, when ConnectTimeoutException happens.

2018-08-17 Thread Rong Tang (JIRA)
Rong Tang created HADOOP-15684:
--

 Summary: triggerActiveLogRoll stuck on dead name node, when 
ConnectTimeoutException happens. 
 Key: HADOOP-15684
 URL: https://issues.apache.org/jira/browse/HADOOP-15684
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.9.1, 2.7.5
Reporter: Rong Tang
 Attachments: FixRollEditLog.patch, 
hadoop--rollingUpgrade-BN2SCH070021402.log

When the name node calls triggerActiveLogRoll and the cachedActiveProxy is a 
dead name node, it throws a ConnectTimeoutException. The expected behavior is 
to try the next NN, but the current logic doesn't do so; instead, it keeps 
trying the dead node, mistakenly taking it as active.

 

2018-08-17 10:02:12,001 WARN [Edit log tailer] 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a 
roll of the active NN

org.apache.hadoop.net.ConnectTimeoutException: Call From 
BN2SCH070021402/25.126.188.193 to BN2SCH070041016.ap.gbl:8020 failed on socket 
timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 2 millis 
timeout 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)

 

C:\Users\rotang>ping BN2SCH070041016

Pinging BN2SCH070041016 [25.126.141.79] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

 

Attached are a log file showing how it repeatedly retries a dead name node, and 
a fix patch.
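The expected failover behaviour described above can be sketched as follows; the proxy interface and method names are illustrative stand-ins, not the actual EditLogTailer code:

```java
import java.util.List;

class LogRollFailover {
    // Hypothetical stand-in for an NN proxy's rollEditLog RPC.
    interface NameNodeProxy {
        void rollEditLog() throws Exception;
    }

    /**
     * Try each candidate NN in turn instead of sticking with a cached,
     * possibly dead, "active" proxy. Returns the index of the first NN
     * that answered, or -1 if none did.
     */
    static int rollOnFirstReachable(List<NameNodeProxy> candidates) {
        for (int i = 0; i < candidates.size(); i++) {
            try {
                candidates.get(i).rollEditLog();
                return i; // reachable: this one can be cached as active
            } catch (Exception e) {
                // ConnectTimeoutException etc.: fall through to the next NN
            }
        }
        return -1;
    }
}
```

The key point is that a ConnectTimeoutException moves the loop on to the next candidate rather than leaving the dead proxy cached as active.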

 

 






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584519#comment-16584519
 ] 

Steve Loughran commented on HADOOP-15679:
-

One change not in this patch: logging when too low a sleep time is passed in. I 
couldn't see how to stop this from polluting all the logs in an application 
without adding counters of "has this message been printed yet", etc., and all 
the complexity that brings.

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)
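The proposal above can be sketched as a small duration parser with the proposed 30s default and 1s floor; this is a simplified, hypothetical stand-in for Hadoop's time-duration configuration handling, not the patch itself:

```java
class ShutdownTimeout {
    static final long DEFAULT_MS = 30_000; // proposed default: 30s, up from 10s
    static final long MIN_MS = 1_000;      // proposed floor: 1s

    /**
     * Parse a duration such as "30s", "500ms" or a bare millisecond count,
     * then clamp to the minimum. A simplified stand-in for a time-duration
     * configuration property.
     */
    static long effectiveTimeoutMs(String configured) {
        if (configured == null || configured.isEmpty()) {
            return DEFAULT_MS;
        }
        long ms;
        if (configured.endsWith("ms")) {
            ms = Long.parseLong(configured.substring(0, configured.length() - 2));
        } else if (configured.endsWith("s")) {
            ms = Long.parseLong(configured.substring(0, configured.length() - 1)) * 1000;
        } else {
            ms = Long.parseLong(configured); // bare number: milliseconds
        }
        return Math.max(ms, MIN_MS); // enforce the floor
    }
}
```

A floor matters because a sub-second timeout would cut off even well-behaved hooks before they can flush anything.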






[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Status: Patch Available  (was: Open)

Patch 003: addresses review comments and checkstyle issues.

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Attachment: HADOOP-15679-003.patch

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584515#comment-16584515
 ] 

Steve Loughran commented on HADOOP-15679:
-

[~xyao]: I'm adding a log at the end of the run, but not doing the per-hook 
details as that would be more complicated. With log4j set to debug and to print 
threads, the log of the test run is:

{code}
2018-08-17 16:54:44,969 [main] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:shutdownHookManager(121)) - invoking 
executeShutdown()
2018-08-17 16:54:44,978 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(257)) - Starting shutdown of hook4 with sleep 
time of 25000
2018-08-17 16:54:46,980 [main] WARN  util.ShutdownHookManager 
(ShutdownHookManager.java:executeShutdown(128)) - ShutdownHook 'Hook' timeout, 
java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
at 
org.apache.hadoop.util.TestShutdownHookManager.shutdownHookManager(TestShutdownHookManager.java:122)
  ...
2018-08-17 16:54:46,980 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(268)) - Shutdown hook4 interrupted exception
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.util.TestShutdownHookManager$Hook.run(TestShutdownHookManager.java:260)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-08-17 16:54:46,984 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(257)) - Starting shutdown of hook3 with sleep 
time of 1000
2018-08-17 16:54:47,985 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(262)) - Completed shutdown of hook3
2018-08-17 16:54:47,986 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(257)) - Starting shutdown of hook2 with sleep 
time of 0
2018-08-17 16:54:47,986 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(262)) - Completed shutdown of hook2
2018-08-17 16:54:47,987 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(257)) - Starting shutdown of hook1 with sleep 
time of 0
2018-08-17 16:54:47,987 [shutdown-hook-0] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:run(262)) - Completed shutdown of hook1
2018-08-17 16:54:47,987 [main] INFO  util.TestShutdownHookManager 
(TestShutdownHookManager.java:shutdownHookManager(123)) - Shutdown completed

// and here, in the real shutdown hook of the process.
2018-08-17 16:54:47,994 [Thread-0] DEBUG util.ShutdownHookManager 
(ShutdownHookManager.java:run(97)) - Completed shutdown in 0.000 seconds; 
Timeouts: 0
2018-08-17 16:54:47,997 [Thread-0] DEBUG util.ShutdownHookManager 
(ShutdownHookManager.java:shutdownExecutor(154)) - ShutdownHookManger completed 
shutdown.

{code}
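The timeout path visible in that log (a FutureTask.get that times out and the hook's sleep being interrupted) can be modelled with a small self-contained sketch; this is a simplified illustration, not the actual ShutdownHookManager code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class HookRunner {
    /**
     * Run one hook on an executor and bound the wait, mirroring how a
     * too-slow hook surfaces as a TimeoutException in the log above.
     * Returns true if the hook completed within the timeout.
     */
    static boolean runWithTimeout(Runnable hook, long timeoutMs) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            Future<?> f = exec.submit(hook);
            try {
                f.get(timeoutMs, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                f.cancel(true); // interrupts the hook's sleep, as in the log
                return false;
            } catch (Exception e) {
                return false; // the hook itself failed
            }
        } finally {
            exec.shutdownNow();
        }
    }
}
```

In the test log, hook4's 25-second sleep overruns the 2-second budget, so get() throws TimeoutException and the subsequent cancel shows up as the InterruptedException in the hook's thread.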

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584508#comment-16584508
 ] 

Steve Loughran commented on HADOOP-15679:
-

[~tmarquardt], thanks for the comments

h3. imports

Nominally alphabetical within groups, the groups being:

java.*
javax.*

(others)

org.apache.*

static imports

It's a bit lax, and generally nobody tries to reorder stuff once it's in, as 
that only makes cherrypicking and merging harder. Imports are always a 
false-positive merge-conflict point. As an example, here all the java.* are 
lower down, but I'm not going to move them.

h3. reentrant shutdown

I triggered this when writing my tests. As the atomic boolean was already being 
set, I decided to follow through with some actions. I'm removing the logging of 
the stack, given your premise that it'll be fairly meaningless.

h3. FileSystem.closeAll()

A faster shutdown would be good, especially when there's one FS instance per 
container, but when it's triggered during JVM shutdown there's a risk that you 
can't create new threads, so the shutdown executor pool needs to be created in 
advance.

I think we do need to do more here... there's a collection of JIRAs related to 
handling problems in shutdown. The timeout one handled one issue 
(deadlock/retry problems in a shutdown hook breaking failover), but it turns 
out to be over-aggressive for some apps, especially when the final committing 
of work is slow.



> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Attachment: (was: HADOOP-15679-003.patch)

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Attachment: HADOOP-15679-003.patch

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2018-08-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Status: Open  (was: Patch Available)

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584485#comment-16584485
 ] 

Lukas Majercak commented on HADOOP-10219:
-

The findbugs warning seems to be spurious at this point?

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for an 
> extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is used in the NN switch from Standby to Active, so a 
> hang here can have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584477#comment-16584477
 ] 

genericqa commented on HADOOP-10219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ipc.Client$Connection.connectingThread; locked 57% of time  
Unsynchronized access at Client.java:57% of time  Unsynchronized access at 
Client.java:[line 1231] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-10219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936086/HADOOP-10219.v3.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e75a8bb02bab 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ab37423 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15057/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1

[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584443#comment-16584443
 ] 

Hudson commented on HADOOP-14624:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14799 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14799/])
HADOOP-14624. Add GenericTestUtils.DelayAnswer that accept slf4j logger 
(gifuma: rev 79c97f6a0bebc95ff81a8ef9b07d3619f05ed583)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestIPCLoggerChannel.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyCheckpoints.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch, HADOOP-14624.017.patch, 
> HADOOP-14624.018.patch
>
>
> Split from HADOOP-14539.
> Currently, GenericTestUtils.DelayAnswer only accepts the commons-logging 
> logger API. Since we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.
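The pattern behind DelayAnswer can be sketched independently of any one logging API. Below is a minimal, self-contained sketch (the class name, methods, and the `Consumer<String>` logging callback are illustrative, not the actual GenericTestUtils API): the answer is held at a latch until the test releases it, and progress is reported through a pluggable callback that either a commons-logging or an slf4j logger can be adapted to.

```java
import java.util.concurrent.CountDownLatch;
import java.util.function.Consumer;

// Minimal sketch of the DelayAnswer idea: a call is held at a latch until the
// test fires proceed(), and progress is reported through a pluggable logger
// callback. Accepting a Consumer<String> instead of a concrete logger type is
// one way to support both commons-logging and slf4j: either API can be
// adapted to the callback.
class DelayAnswerSketch {
    private final Consumer<String> log;                        // adapter over any logger API
    private final CountDownLatch waitLatch = new CountDownLatch(1);
    private final CountDownLatch firedLatch = new CountDownLatch(1);

    DelayAnswerSketch(Consumer<String> log) {
        this.log = log;
    }

    /** Called by the code under test; blocks until the test calls proceed(). */
    Object answer(Object invocation) throws InterruptedException {
        firedLatch.countDown();
        log.accept("waiting on latch before answering: " + invocation);
        waitLatch.await();
        log.accept("answering: " + invocation);
        return invocation;
    }

    /** Called by the test to let the delayed call continue. */
    void proceed() {
        waitLatch.countDown();
    }

    /** Called by the test to wait until the delayed call has started. */
    void waitForCall() throws InterruptedException {
        firedLatch.await();
    }
}
```

With this shape, an slf4j `Logger` would plug in as `log::info`, the same way a commons-logging `Log` would.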



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15660) ABFS: Add support for OAuth

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584440#comment-16584440
 ] 

genericqa commented on HADOOP-15660:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
51s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 
40 new + 6 unchanged - 2 fixed = 46 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-tools/hadoop-azure generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  Redundant nullcheck of 
org.apache.hadoop.fs.azurebfs.oauth2.AzureADToken.getExpiry(), which is known 
to be non-null in 
org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider.isTokenAboutToExpire() 
 Redundant null check at AccessTokenProvider.java:is known to be non-null in 
org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider.isTokenAboutToExpire() 
 Redundant null check at AccessTokenProvider.java:[line 80] |
|  |  Nullcheck of clientId at line 128 of value previously dereferenced in 
org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenFromMsi(String,
 String, boolean)  At AzureADAuthenticator.java:128 of value previously 
dereferenced in 
org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenFromMsi(String,
 String, boolean)  At AzureADAuthenticator.java:[line 114] |
|  |  Nullcheck of tenantGuid at line 123 of value previously dereferenced i

[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584431#comment-16584431
 ] 

Anbang Hu commented on HADOOP-10219:


Thanks [~lukmajercak] for the patch [^HADOOP-10219.v3.patch]. +1 on the patch.

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is for example used in the NN switch from Standby to 
> Active, and can therefore have very bad consequences and cause downtime.
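The fix direction discussed here (letting stop() cut a blocked connection setup short instead of waiting out the retry policy) can be sketched as follows. This is a hypothetical, simplified sketch, not the actual ipc.Client code; the names `StoppableConnector`, `setupConnection`, and `tryConnect` are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a connect-retry loop that checks a running flag and
// responds to interrupt, so stop() does not have to wait for the full retry
// policy to time out.
class StoppableConnector {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private volatile Thread connectingThread;

    boolean setupConnection(int maxRetries) {
        connectingThread = Thread.currentThread();
        try {
            for (int attempt = 0; attempt < maxRetries; attempt++) {
                if (!running.get()) {
                    return false;              // client stopped: bail out early
                }
                if (tryConnect()) {
                    return true;
                }
                try {
                    Thread.sleep(100);         // simplified backoff between retries
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;              // stop() interrupted the wait
                }
            }
            return false;
        } finally {
            connectingThread = null;
        }
    }

    void stop() {
        running.set(false);                    // checked at the top of the loop
        Thread t = connectingThread;
        if (t != null) {
            t.interrupt();                     // wake a sleeping retry immediately
        }
    }

    private boolean tryConnect() {
        return false;                          // stand-in: always fails, forcing retries
    }
}
```

With this shape, stop() returns promptly even if a connection attempt is mid-retry, which is the behavior the shutdown hook and the NN failover path need.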






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-08-17 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-14624:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch, HADOOP-14624.017.patch, 
> HADOOP-14624.018.patch
>
>
> Split from HADOOP-14539.
> Currently, GenericTestUtils.DelayAnswer only accepts the commons-logging 
> logger API. Since we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-08-17 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-14624:
--
Fix Version/s: 3.2.0

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch, HADOOP-14624.017.patch, 
> HADOOP-14624.018.patch
>
>
> Split from HADOOP-14539.
> Currently, GenericTestUtils.DelayAnswer only accepts the commons-logging 
> logger API. Since we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.






[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-08-17 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584413#comment-16584413
 ] 

Giovanni Matteo Fumarola commented on HADOOP-14624:
---

Committed to trunk.
Thanks [~iapicker] and [~vincent he] for working on this.

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch, HADOOP-14624.017.patch, 
> HADOOP-14624.018.patch
>
>
> Split from HADOOP-14539.
> Currently, GenericTestUtils.DelayAnswer only accepts the commons-logging 
> logger API. Since we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584382#comment-16584382
 ] 

Lukas Majercak commented on HADOOP-10219:
-

Added an access lock for the connectingThread in v3.patch to get rid of the 
findbugs warning.
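The access-lock fix can be sketched as follows. This is a hypothetical illustration of the pattern (the class and method names are not the actual Client$Connection code): every read and write of the guarded field goes through the same lock, so findbugs no longer sees any unsynchronized access.

```java
// Sketch of the access-lock pattern used to silence an "inconsistent
// synchronization" findbugs warning: all accesses to the guarded field are
// funneled through one dedicated lock object.
class ConnectionSketch {
    private final Object connectingThreadLock = new Object();
    private Thread connectingThread;           // guarded by connectingThreadLock

    void setConnectingThread(Thread t) {
        synchronized (connectingThreadLock) {
            connectingThread = t;
        }
    }

    void interruptConnectingThread() {
        synchronized (connectingThreadLock) {
            if (connectingThread != null) {
                connectingThread.interrupt();
            }
        }
    }
}
```

A dedicated lock object (rather than synchronizing on `this`) keeps the field's locking independent of any other synchronized methods on the connection.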

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is for example used in the NN switch from Standby to 
> Active, and can therefore have very bad consequences and cause downtime.






[jira] [Updated] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-10219:

Attachment: HADOOP-10219.v3.patch

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch, HADOOP-10219.v3.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is for example used in the NN switch from Standby to 
> Active, and can therefore have very bad consequences and cause downtime.






[jira] [Updated] (HADOOP-15660) ABFS: Add support for OAuth

2018-08-17 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15660:
-
Attachment: HADOOP-15660-HADOOP-15407-002.patch

> ABFS: Add support for OAuth
> ---
>
> Key: HADOOP-15660
> URL: https://issues.apache.org/jira/browse/HADOOP-15660
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15660-HADOOP-15407-001.patch, 
> HADOOP-15660-HADOOP-15407-002.patch
>
>
> - Add support for OAuth






[jira] [Commented] (HADOOP-15660) ABFS: Add support for OAuth

2018-08-17 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584380#comment-16584380
 ] 

Da Zhou commented on HADOOP-15660:
--

Adding patch HADOOP-15660-HADOOP-15407-002.patch:
 - Cleaned up the code according to the feedback above.
 - Fixed the findbugs issue.
 - Added new unit tests for QueryParams.
 - Added OAuth tests for the Blob Data Contributor client and the Blob Data 
Reader client.
 - Updated ListResultEntrySchema for OAuth-enabled accounts.
 - Added OAuth configuration properties in "azure-bfs-test.xml":
 Users can use SharedKey to run tests in non-secure mode, or provide 
OAuth-related properties to run tests in secure mode.
 Note that in the current preview, accounts with OAuth enabled are not 
backward compatible with WASB, so the compatibility tests will be skipped 
when OAuth is enabled.

> ABFS: Add support for OAuth
> ---
>
> Key: HADOOP-15660
> URL: https://issues.apache.org/jira/browse/HADOOP-15660
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15660-HADOOP-15407-001.patch
>
>
> - Add support for OAuth






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584371#comment-16584371
 ] 

genericqa commented on HADOOP-10219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ipc.Client$Connection.connectingThread; locked 57% of time  
Unsynchronized access at Client.java:57% of time  Unsynchronized access at 
Client.java:[line 1224] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-10219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936060/HADOOP-10219.v2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 74f92d8eab3c 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ab37423 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15055/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1

[jira] [Commented] (HADOOP-9214) Create a new touch command to allow modifying atime and mtime

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584270#comment-16584270
 ] 

Hudson commented on HADOOP-9214:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14797 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14797/])
HADOOP-9214. Create a new touch command to allow modifying atime and (xiao: rev 
60ffec9f7921a50aff20434c1042b16fa59240f7)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsCommand.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/TouchCommands.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Touch.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellTouch.java


> Create a new touch command to allow modifying atime and mtime
> -
>
> Key: HADOOP-9214
> URL: https://issues.apache.org/jira/browse/HADOOP-9214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.5
>Reporter: Brian Burton
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-9214-001.patch, HADOOP-9214-002.patch, 
> HADOOP-9214-003.patch, HADOOP-9214-004.patch, HADOOP-9214-005.patch, 
> HADOOP-9214-006.patch
>
>
> Currently there is no way to set the mtime or atime of a file from the 
> "hadoop fs" command line. It would be useful if the 'hadoop fs -touchz' 
> command were updated to include this functionality.






[jira] [Updated] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-10219:

Description: 
When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
client is blocked spinning due to a connectivity problem, it does not exit 
until the policy has timed out, so the stopClient() operation can hang for an 
extended period of time.

This can surface in the shutdown hook of FileSystem.cache.closeAll().

Also, Client.stop() is for example used in the NN switch from Standby to 
Active, and can therefore have very bad consequences and cause downtime.

  was:
When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
client
is blocked spinning due to a connectivity problem, it does not exit until the 
policy has timed out -so the stopClient() operation can hang for an extended 
period of time.

This can surface in the shutdown hook of FileSystem.cache.closeAll()


> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
> Also, Client.stop() is for example used in the NN switch from Standby to 
> Active, and can therefore have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-15675) checkstyle suppressions files are cached

2018-08-17 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584268#comment-16584268
 ] 

Allen Wittenauer commented on HADOOP-15675:
---

While working with YETUS-660, it's become evident that the strategy of bundling 
the checkstyle suppressions file into the hadoop-build-tools jar makes it 
extremely likely that checkstyle changes will get missed during a build.  I 
believe I have a way to mitigate it during precommit, but as it stands, 
individual users are *very likely* to have problems.

> checkstyle suppressions files are cached
> 
>
> Key: HADOOP-15675
> URL: https://issues.apache.org/jira/browse/HADOOP-15675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kitti Nanasi
>Priority: Minor
>
> If a patch is created with checkstyle errors, for example when a modified 
> line is longer than 80 characters, then running checkstyle with the 
> test-patch script runs to success (though it should fail and show an error 
> about the long line).
> {code:java}
> dev-support/bin/test-patch  --plugins="-checkstyle" test.patch{code}
> However, it does show the error (so it works correctly) when run with the 
> IDEA checkstyle plugin.
>  
> I only tried it out for patches with overly long lines and wrong 
> indentation, but I assume it can be a more general problem.
> We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
> line" checkstyle error. In the first build for that patch, the checkstyle 
> report showed the error, but when it was run again with the same patch, the 
> error disappeared. So the checkstyle checking probably stopped working on 
> trunk somewhere between April and July 2018.






[jira] [Updated] (HADOOP-15675) checkstyle suppressions files are cached

2018-08-17 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-15675:
--
Summary: checkstyle suppressions files are cached  (was: checkstyle fails 
to execute)

> checkstyle suppressions files are cached
> 
>
> Key: HADOOP-15675
> URL: https://issues.apache.org/jira/browse/HADOOP-15675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kitti Nanasi
>Priority: Minor
>
> If a patch is created with checkstyle errors, for example when a modified 
> line is longer than 80 characters, then running checkstyle with the 
> test-patch script runs to success (though it should fail and show an error 
> about the long line).
> {code:java}
> dev-support/bin/test-patch  --plugins="-checkstyle" test.patch{code}
> However, it does show the error (so it works correctly) when run with the 
> IDEA checkstyle plugin.
>  
> I only tried it out for patches with overly long lines and wrong 
> indentation, but I assume it can be a more general problem.
> We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
> line" checkstyle error. In the first build for that patch, the checkstyle 
> report showed the error, but when it was run again with the same patch, the 
> error disappeared. So the checkstyle checking probably stopped working on 
> trunk somewhere between April and July 2018.






[jira] [Updated] (HADOOP-15683) Client.setupConnection should not block Client.stop() calls

2018-08-17 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-15683:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Client.setupConnection should not block Client.stop() calls
> ---
>
> Key: HADOOP-15683
> URL: https://issues.apache.org/jira/browse/HADOOP-15683
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HADOOP-15683.000.patch
>
>
> If the IPC Client is still setting up connections when Client.stop() is 
> called, the stop() call will not succeed until setupConnection finishes 
> (successfully or with failure). 
> This can cause very long delay (maxFailures * timeout can be more than 
> 10minutes depending on configuration) in stopping the client. 
> Client.stop() is for example used in NN switch from Standby to Active, and 
> can therefore have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-15683) Client.setupConnection should not block Client.stop() calls

2018-08-17 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584262#comment-16584262
 ] 

Lukas Majercak commented on HADOOP-15683:
-

Seems like it, yeah. Moving to HADOOP-10219 and closing this one as a duplicate.

> Client.setupConnection should not block Client.stop() calls
> ---
>
> Key: HADOOP-15683
> URL: https://issues.apache.org/jira/browse/HADOOP-15683
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HADOOP-15683.000.patch
>
>
> If the IPC Client is still setting up connections when Client.stop() is 
> called, the stop() call will not succeed until setupConnection finishes 
> (successfully or with failure). 
> This can cause a very long delay (maxFailures * timeout can be more than 
> 10 minutes depending on configuration) in stopping the client. 
> Client.stop() is used, for example, in the NN switch from Standby to Active, 
> and can therefore have very bad consequences and cause downtime.






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584257#comment-16584257
 ] 

Lukas Majercak commented on HADOOP-10219:
-

Hi [~ste...@apache.org]. Thanks for pointing me to this JIRA. I added the v2 
patch, porting the unit test from HADOOP-15683 and keeping the fix from this 
one. This LGTM; thoughts?

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll()
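The blocking described above can be avoided if the connection-setup retry loop re-checks a stop flag between attempts, so a stop request is observed within one retry interval instead of after maxFailures timeouts. A minimal sketch of that pattern — class and method names here are hypothetical illustrations, not the actual ipc.Client code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a retry loop that re-checks a stop flag on every
// iteration, so stop() takes effect promptly instead of waiting out
// maxFailures * timeout.
public class StoppableConnector {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final int maxFailures;
    private final long retrySleepMillis;

    public StoppableConnector(int maxFailures, long retrySleepMillis) {
        this.maxFailures = maxFailures;
        this.retrySleepMillis = retrySleepMillis;
    }

    /** Simulated connection attempt that always fails (stand-in only). */
    private boolean tryConnect() {
        return false;
    }

    /** Returns true if connected, false if stopped or retries exhausted. */
    public boolean setupConnection() {
        for (int failures = 0; failures < maxFailures; failures++) {
            if (!running.get()) {
                return false;                 // stop() was called: bail out early
            }
            if (tryConnect()) {
                return true;
            }
            try {
                TimeUnit.MILLISECONDS.sleep(retrySleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public void stop() {
        running.set(false);                   // observed on the next iteration
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableConnector c = new StoppableConnector(1000, 50);
        Thread t = new Thread(c::setupConnection);
        t.start();
        Thread.sleep(100);
        c.stop();
        t.join();                             // returns within ~one retry sleep
        System.out.println("stopped promptly");
    }
}
```

With this shape, stop() costs at most one retry sleep rather than the full retry policy.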






[jira] [Updated] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-10219:

Attachment: HADOOP-10219.v2.patch

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-10219.patch, HADOOP-10219.v1.patch, 
> HADOOP-10219.v2.patch
>
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the policy has timed out, so the stopClient() operation can hang for 
> an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll()






[jira] [Updated] (HADOOP-9214) Create a new touch command to allow modifying atime and mtime

2018-08-17 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-9214:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thank you for the contribution, [~hgadre]!

> Create a new touch command to allow modifying atime and mtime
> -
>
> Key: HADOOP-9214
> URL: https://issues.apache.org/jira/browse/HADOOP-9214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.5
>Reporter: Brian Burton
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-9214-001.patch, HADOOP-9214-002.patch, 
> HADOOP-9214-003.patch, HADOOP-9214-004.patch, HADOOP-9214-005.patch, 
> HADOOP-9214-006.patch
>
>
> Currently there is no way to set the mtime or atime of a file from the 
> "hadoop fs" command line. It would be useful if the 'hadoop fs -touchz' 
> command were updated to include this functionality.






[jira] [Commented] (HADOOP-14154) Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584229#comment-16584229
 ] 

Hudson commented on HADOOP-14154:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14796 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14796/])
HADOOP-14154 Persist isAuthoritative bit in DynamoDBMetaStore (fabbri: rev 
d7232857d8d1e10cdac171acdc931187e45fd6be)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DDBPathMetadata.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathMetadataDynamoDBTranslation.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java


> Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)
> -
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Rajesh Balamohan
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch, HADOOP-14154-spec-001.pdf, 
> HADOOP-14154-spec-002.pdf, HADOOP-14154.001.patch, HADOOP-14154.002.patch, 
> HADOOP-14154.003.patch, HADOOP-14154.004.patch, HADOOP-14154.005.patch, 
> HADOOP-14154.006.patch, HADOOP-14154.007.patch, all-logs.txt, 
> perf-eval-v1.diff, run-dir-perf-itest-v2.sh, run-dir-perf-itest.sh
>
>
> Add support for "authoritative mode" for DynamoDBMetadataStore.
> The missing feature is to persist the bit set in 
> {{DirListingMetadata.isAuthoritative}}. 
> This topic has been super confusing for folks so I will also file a 
> documentation Jira to explain the design better.
> We may want to also rename the DirListingMetadata.isAuthoritative field to 
> .isFullListing to eliminate the multiple uses and meanings of the word 
> "authoritative".
>  






[jira] [Updated] (HADOOP-9214) Create a new touch command to allow modifying atime and mtime

2018-08-17 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-9214:
--
Summary: Create a new touch command to allow modifying atime and mtime  
(was: Update touchz to allow modifying atime and mtime)

> Create a new touch command to allow modifying atime and mtime
> -
>
> Key: HADOOP-9214
> URL: https://issues.apache.org/jira/browse/HADOOP-9214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.5
>Reporter: Brian Burton
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HADOOP-9214-001.patch, HADOOP-9214-002.patch, 
> HADOOP-9214-003.patch, HADOOP-9214-004.patch, HADOOP-9214-005.patch, 
> HADOOP-9214-006.patch
>
>
> Currently there is no way to set the mtime or atime of a file from the 
> "hadoop fs" command line. It would be useful if the 'hadoop fs -touchz' 
> command were updated to include this functionality.






[jira] [Commented] (HADOOP-9214) Update touchz to allow modifying atime and mtime

2018-08-17 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584209#comment-16584209
 ] 

Xiao Chen commented on HADOOP-9214:
---

+1, committing this.

Thanks for the great work here Hrishikesh, and thanks Daryn for the comment!

> Update touchz to allow modifying atime and mtime
> 
>
> Key: HADOOP-9214
> URL: https://issues.apache.org/jira/browse/HADOOP-9214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.5
>Reporter: Brian Burton
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HADOOP-9214-001.patch, HADOOP-9214-002.patch, 
> HADOOP-9214-003.patch, HADOOP-9214-004.patch, HADOOP-9214-005.patch, 
> HADOOP-9214-006.patch
>
>
> Currently there is no way to set the mtime or atime of a file from the 
> "hadoop fs" command line. It would be useful if the 'hadoop fs -touchz' 
> command were updated to include this functionality.






[jira] [Commented] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584199#comment-16584199
 ] 

Hudson commented on HADOOP-15674:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14795 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14795/])
HADOOP-15674. Test failure TestSSLHttpServer.testExcludedCiphers with (xiao: 
rev 8d7c93186e3090b19aa59006bb6b32ba929bd8e6)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java


> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 2.8.5, 2.7.8, 3.0.4, 3.2, 3.1.2
>
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}
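The assertion at issue is that the server's excluded cipher suites leave no suite in common with what the client offers, so the handshake must fail. The overlap check itself reduces to a set intersection; a tiny illustrative sketch (this is not the actual TestSSLHttpServer code, which drives a real TLS handshake):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CipherOverlap {
    /** Returns true when client and server share at least one cipher suite. */
    static boolean haveCommonCipher(List<String> serverEnabled,
                                    List<String> clientEnabled) {
        Set<String> common = new HashSet<>(serverEnabled);
        common.retainAll(clientEnabled);      // set intersection
        return !common.isEmpty();
    }

    public static void main(String[] args) {
        // Server excludes the CBC suite; the client offers only that suite,
        // so no suites are shared and the handshake is expected to fail.
        List<String> server = Arrays.asList(
            "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256");
        List<String> client = Arrays.asList(
            "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256");
        System.out.println(haveCommonCipher(server, client)); // prints false
    }
}
```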






[jira] [Updated] (HADOOP-14154) Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)

2018-08-17 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-14154:
--
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Good work on the patch, [~gabor.bota], and also on figuring 
out the performance issue. Thank you!

> Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)
> -
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Rajesh Balamohan
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch, HADOOP-14154-spec-001.pdf, 
> HADOOP-14154-spec-002.pdf, HADOOP-14154.001.patch, HADOOP-14154.002.patch, 
> HADOOP-14154.003.patch, HADOOP-14154.004.patch, HADOOP-14154.005.patch, 
> HADOOP-14154.006.patch, HADOOP-14154.007.patch, all-logs.txt, 
> perf-eval-v1.diff, run-dir-perf-itest-v2.sh, run-dir-perf-itest.sh
>
>
> Add support for "authoritative mode" for DynamoDBMetadataStore.
> The missing feature is to persist the bit set in 
> {{DirListingMetadata.isAuthoritative}}. 
> This topic has been super confusing for folks so I will also file a 
> documentation Jira to explain the design better.
> We may want to also rename the DirListingMetadata.isAuthoritative field to 
> .isFullListing to eliminate the multiple uses and meanings of the word 
> "authoritative".
>  






[jira] [Updated] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15674:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.2
   3.2
   3.0.4
   2.7.8
   2.8.5
   2.9.2
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3[0-1], branch-2, branch-2.[7-9].

There was a minor conflict about log4j in branch-2.8 that needed to be taken 
care of.

Verified the test passes locally on branch-2 and branch-2.8 before pushing.

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 2.8.5, 2.7.8, 3.0.4, 3.2, 3.1.2
>
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}






[jira] [Commented] (HADOOP-15670) UserGroupInformation TGT renewer thread doesn't use monotonically increasing time for calculating interval to sleep

2018-08-17 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584181#comment-16584181
 ] 

Hrishikesh Gadre commented on HADOOP-15670:
---

{quote}I think we document in our Time class that we don't trust nanotime to be 
very monotonic.
{quote}
Well... the javadoc for the Time::now() API clearly states that it should not 
be used for measuring elapsed time; Time::monotonicNow() should be used instead.

https://github.com/apache/hadoop/blob/8d7c93186e3090b19aa59006bb6b32ba929bd8e6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Time.java#L49-L57

> UserGroupInformation TGT renewer thread doesn't use monotonically increasing 
> time for calculating interval to sleep
> ---
>
> Key: HADOOP-15670
> URL: https://issues.apache.org/jira/browse/HADOOP-15670
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> As per the [documentation of the Time#now() 
> method|https://github.com/apache/hadoop/blob/74411ce0ce7336c0f7bb5793939fdd64a5dcdef6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Time.java#L49-L57],
>  it should not be used for calculating a duration or a sleep interval. But the 
> TGT renewer thread in UserGroupInformation doesn't follow this 
> recommendation:
> [https://github.com/apache/hadoop/blob/74411ce0ce7336c0f7bb5793939fdd64a5dcdef6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L892-L899]
> This should be fixed to use the Time.monotonicNow() API instead.
>  
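The distinction matters because wall-clock time can jump backwards or forwards (NTP corrections, manual clock changes), while a monotonic clock cannot, so only the latter is safe for computing sleep intervals. A minimal illustration of the two measurement styles, using plain JDK calls as stand-ins for Hadoop's Time.now()/Time.monotonicNow():

```java
public class ClockDemo {
    /** Wall-clock style, like Time.now(): subject to clock jumps. */
    static long now() {
        return System.currentTimeMillis();
    }

    /** Monotonic style, like Time.monotonicNow(): safe for intervals. */
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNow();
        Thread.sleep(50);
        long elapsed = monotonicNow() - start;
        // A monotonic clock guarantees elapsed >= 0 and roughly the sleep
        // duration; the same arithmetic with now() could even go negative if
        // the wall clock were stepped backwards mid-sleep, which is exactly
        // the hazard for a TGT renewer computing how long to sleep.
        System.out.println("elapsed ms: " + elapsed);
    }
}
```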






[jira] [Commented] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584173#comment-16584173
 ] 

Xiao Chen commented on HADOOP-15674:


+1, committing. Thanks for seeing this through to the finish line, [~snemeth].

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}






[jira] [Commented] (HADOOP-15635) s3guard set-capacity command to fail fast if bucket is unguarded

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584135#comment-16584135
 ] 

genericqa commented on HADOOP-15635:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 33m 
21s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
49s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15635 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936017/HADOOP-15635.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 59ee491eba9e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa121eb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15054/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15054/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15054/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard set-capacity command to fail fast if bucket is unguarded

[jira] [Resolved] (HADOOP-15519) KMS fails to read the existing key metadata after upgrading to JDK 1.8u171

2018-08-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-15519.
-
Resolution: Duplicate

> KMS fails to read the existing key metadata after upgrading to JDK 1.8u171 
> ---
>
> Key: HADOOP-15519
> URL: https://issues.apache.org/jira/browse/HADOOP-15519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.3
>Reporter: Vipin Rathor
>Priority: Critical
>
> Steps to reproduce:
>  a. Set up a KMS with any OpenJDK 1.8 before u171 and create a few KMS keys.
>  b. Update the KMS to run with OpenJDK 1.8u171, after which the keys can't be 
> read anymore, as can be seen below:
> {code:java}
> hadoop key list -metadata
>  : null
> {code}
> c. Going back to the earlier JDK version fixes the issue.
>  
> There is no direct error or stack trace in kms.log when it is unable to read 
> the key metadata. Only Java serialization INFO messages are printed, followed 
> by this one line in the log with an empty message:
> {code:java}
> ERROR RangerKeyStore - 
> {code}
> In some cases, kms.log can also have these lines:
> {code:java}
> 2018-05-18 10:40:46,438 DEBUG RangerKmsAuthorizer - <== 
> RangerKmsAuthorizer.assertAccess(null, rangerkms/node1.host@env.com 
> (auth:KERBEROS), GET_METADATA) 
> 2018-05-18 10:40:46,598 INFO serialization - ObjectInputFilter REJECTED: 
> class org.apache.hadoop.crypto.key.RangerKeyStoreProvider$KeyMetadata, array 
> length: -1, nRefs: 1, depth: 1, bytes: 147, ex: n/a
> 2018-05-18 10:40:46,598 ERROR RangerKeyStore - 
> {code}






[jira] [Updated] (HADOOP-15635) s3guard set-capacity command to fail fast if bucket is unguarded

2018-08-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15635:

Status: Patch Available  (was: In Progress)

> s3guard set-capacity command to fail fast if bucket is unguarded
> 
>
> Key: HADOOP-15635
> URL: https://issues.apache.org/jira/browse/HADOOP-15635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15635.001.patch
>
>
> If you run {{hadoop s3guard set-capacity s3a://landsat-pds}}, or point it at 
> any other bucket which exists but doesn't have s3guard enabled, you get a 
> stack trace reporting that the DDB table doesn't exist.
> The command should check that the bucket is guarded and fail fast with a 
> clear message if it is not.
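The requested fail-fast behaviour amounts to probing the bucket's guard status before touching DynamoDB. A hedged sketch of that shape — GuardProbe, isGuarded, and UsageException are made-up names for illustration, not the actual s3guard code:

```java
public class SetCapacityCheck {
    static class UsageException extends RuntimeException {
        UsageException(String msg) { super(msg); }
    }

    /** Stand-in for querying whether a bucket has a metadata store attached. */
    interface GuardProbe {
        boolean isGuarded(String bucket);
    }

    /** Fail fast with a clear message instead of a DDB stack trace. */
    static void setCapacity(GuardProbe probe, String bucket, int capacity) {
        if (!probe.isGuarded(bucket)) {
            throw new UsageException(
                "Bucket " + bucket + " is not guarded; "
                + "set-capacity requires an S3Guard-enabled bucket");
        }
        // ... only now proceed to update the DynamoDB table capacity ...
    }

    public static void main(String[] args) {
        try {
            // Probe that reports "unguarded" for every bucket.
            setCapacity(bucket -> false, "s3a://landsat-pds", 100);
        } catch (UsageException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point of the design is that the user sees a usage error naming the bucket, rather than a stack trace about a missing table.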






[jira] [Updated] (HADOOP-15635) s3guard set-capacity command to fail fast if bucket is unguarded

2018-08-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15635:

Attachment: HADOOP-15635.001.patch

> s3guard set-capacity command to fail fast if bucket is unguarded
> 
>
> Key: HADOOP-15635
> URL: https://issues.apache.org/jira/browse/HADOOP-15635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15635.001.patch
>
>
> If you run {{hadoop s3guard set-capacity s3a://landsat-pds}}, or point it at 
> any other bucket which exists but doesn't have s3guard enabled, you get a 
> stack trace reporting that the DDB table doesn't exist.
> The command should check that the bucket is guarded and fail fast with a 
> clear message if it is not.






[jira] [Updated] (HADOOP-15635) s3guard set-capacity command to fail fast if bucket is unguarded

2018-08-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15635:

Attachment: HADOOP-15635.001.patch

> s3guard set-capacity command to fail fast if bucket is unguarded
> 
>
> Key: HADOOP-15635
> URL: https://issues.apache.org/jira/browse/HADOOP-15635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you run {{hadoop s3guard set-capacity s3a://landsat-pds}}, or point it at 
> any other bucket which exists but doesn't have s3guard enabled, you get a 
> stack trace reporting that the DDB table doesn't exist.
> The command should check that the bucket is guarded and fail fast with a 
> clear message if it is not.






[jira] [Updated] (HADOOP-15635) s3guard set-capacity command to fail fast if bucket is unguarded

2018-08-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15635:

Attachment: (was: HADOOP-15635.001.patch)

> s3guard set-capacity command to fail fast if bucket is unguarded
> 
>
> Key: HADOOP-15635
> URL: https://issues.apache.org/jira/browse/HADOOP-15635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do {{hadoop s3guard set-capacity s3a://landsat-pds}}, or any 
> other bucket which exists but doesn't have s3guard enabled, you get a stack 
> trace reporting that the ddb table doesn't exist.
> The command should check whether the bucket is guarded and fail fast with a clear message if it is not.






[jira] [Updated] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2018-08-17 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-8807:
-
Fix Version/s: 3.2.0

> Update README and website to reflect HADOOP-8662
> 
>
> Key: HADOOP-8807
> URL: https://issues.apache.org/jira/browse/HADOOP-8807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: HADOOP-8807.01.patch
>
>
> HADOOP-8662 removed the various tabs from the website. Our top-level 
> README.txt and the generated docs refer to them (eg hadoop.apache.org/core, 
> /hdfs etc). Let's fix that.






[jira] [Commented] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583815#comment-16583815
 ] 

genericqa commented on HADOOP-15674:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 38m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 41m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 35m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935989/HADOOP-15674.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be6f569f765b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c67b065 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15053/testReport/ |
| Max. process+thread count | 1352 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15053/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> 

[jira] [Commented] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583732#comment-16583732
 ] 

Hudson commented on HADOOP-8807:


FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14793 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14793/])
HADOOP-8807. Update README and website to reflect HADOOP-8662. (elek: rev 
77b015000a48545209928e31630adaaf6960b4c5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/docs/libhdfs_footer.html
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ServletUtil.java
* (edit) hadoop-mapreduce-project/pom.xml
* (edit) README.txt


> Update README and website to reflect HADOOP-8662
> 
>
> Key: HADOOP-8807
> URL: https://issues.apache.org/jira/browse/HADOOP-8807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HADOOP-8807.01.patch
>
>
> HADOOP-8662 removed the various tabs from the website. Our top-level 
> README.txt and the generated docs refer to them (eg hadoop.apache.org/core, 
> /hdfs etc). Let's fix that.






[jira] [Commented] (HADOOP-8662) remove separate pages for Common, HDFS & MR projects

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583733#comment-16583733
 ] 

Hudson commented on HADOOP-8662:


FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14793 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14793/])
HADOOP-8807. Update README and website to reflect HADOOP-8662. (elek: rev 
77b015000a48545209928e31630adaaf6960b4c5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/docs/libhdfs_footer.html
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ServletUtil.java
* (edit) hadoop-mapreduce-project/pom.xml
* (edit) README.txt


> remove separate pages for Common, HDFS & MR projects
> 
>
> Key: HADOOP-8662
> URL: https://issues.apache.org/jira/browse/HADOOP-8662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: site
>Reporter: Doug Cutting
>Assignee: Doug Cutting
>Priority: Minor
> Fix For: site
>
> Attachments: HADOOP-8662.patch, HADOOP-8662.patch
>
>
> The tabs on the top of http://hadoop.apache.org/ link to separate sites for 
> Common, HDFS and MapReduce modules.  These sites are identical except for the 
> mailing lists.  I propose we move the mailing list information to the TLP 
> mailing list page and remove these sub-project websites.






[jira] [Updated] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2018-08-17 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-8807:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Just committed to trunk. Thanks for the contribution, [~boky01]!

> Update README and website to reflect HADOOP-8662
> 
>
> Key: HADOOP-8807
> URL: https://issues.apache.org/jira/browse/HADOOP-8807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HADOOP-8807.01.patch
>
>
> HADOOP-8662 removed the various tabs from the website. Our top-level 
> README.txt and the generated docs refer to them (eg hadoop.apache.org/core, 
> /hdfs etc). Let's fix that.






[jira] [Commented] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2018-08-17 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583668#comment-16583668
 ] 

Elek, Marton commented on HADOOP-8807:
--

While the redirects are still working, I agree that it's better to use the right URLs.

The patch looks good to me. Will commit it to the trunk shortly.

> Update README and website to reflect HADOOP-8662
> 
>
> Key: HADOOP-8807
> URL: https://issues.apache.org/jira/browse/HADOOP-8807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HADOOP-8807.01.patch
>
>
> HADOOP-8662 removed the various tabs from the website. Our top-level 
> README.txt and the generated docs refer to them (eg hadoop.apache.org/core, 
> /hdfs etc). Let's fix that.






[jira] [Updated] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2018-08-17 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-8807:
-
Priority: Trivial  (was: Major)

> Update README and website to reflect HADOOP-8662
> 
>
> Key: HADOOP-8807
> URL: https://issues.apache.org/jira/browse/HADOOP-8807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HADOOP-8807.01.patch
>
>
> HADOOP-8662 removed the various tabs from the website. Our top-level 
> README.txt and the generated docs refer to them (eg hadoop.apache.org/core, 
> /hdfs etc). Let's fix that.






[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2018-08-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583579#comment-16583579
 ] 

genericqa commented on HADOOP-10219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ipc.Client$Connection.connectingThread; locked 57% of time  
Unsynchronized access at Client.java:57% of time  Unsynchronized access at 
Client.java:[line 1224] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-10219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767388/HADOOP-10219.v1.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cbef0e8dc67d 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1697a02 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15052/artifact/out/ne
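The "Inconsistent synchronization" FindBugs finding above can be illustrated in isolation. This is a hedged sketch, not Hadoop's actual `Client.java`: a field written under a lock but read without one triggers the warning; declaring it `volatile` (or guarding every access) both silences FindBugs and fixes the cross-thread visibility bug.

```java
// Hedged sketch of the FindBugs IS2_INCONSISTENT_SYNC pattern -- not the
// real org.apache.hadoop.ipc.Client code.
public class ConnectingThreadHolder {

    // volatile: safe to read from interrupt/close paths without the lock.
    private volatile Thread connectingThread;

    public synchronized void beginConnect() {
        connectingThread = Thread.currentThread();
    }

    public synchronized void endConnect() {
        connectingThread = null;
    }

    // Called from another thread without holding the lock; without
    // volatile this read would be the "unsynchronized access" FindBugs flags.
    public void interruptConnecting() {
        Thread t = connectingThread;   // copy once to avoid a null race
        if (t != null) {
            t.interrupt();
        }
    }
}
```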

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-17 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583576#comment-16583576
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your review!

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch, HADOOP-15633.004.patch, HADOOP-15633.005.patch
>
>
> Reproduce it as follows:
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get these errors:
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.
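The failure mode above can be reproduced with local files: a plain file sits where the trash path needs a directory, so mkdirs fails with "Path is not a directory". This is a hedged sketch using `java.nio.file`, not the actual `TrashPolicyDefault` fix; the rename-aside step only illustrates the general resolution idea of moving the blocking file out of the way before retrying.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hedged local-filesystem sketch of the HDFS trash-directory conflict.
public class TrashDirConflict {

    // Returns true once the wanted trash subdirectory exists.
    static boolean demo() throws IOException {
        Path root = Files.createTempDirectory("trash-demo");
        Path bbb = root.resolve("aaa").resolve("bbb");
        Files.createDirectories(bbb.getParent());  // root/aaa
        Files.createFile(bbb);                     // first delete left a *file* named bbb

        Path wanted = bbb.resolve("ccc");          // second delete needs bbb as a directory
        try {
            Files.createDirectories(wanted.getParent());
        } catch (FileAlreadyExistsException e) {
            // Same shape as the HDFS "Path is not a directory" error:
            // rename the blocking file out of the way, then retry mkdirs.
            Files.move(bbb, bbb.resolveSibling("bbb." + System.currentTimeMillis()));
            Files.createDirectories(wanted.getParent());
        }
        return Files.isDirectory(wanted.getParent());
    }

    public static void main(String[] args) throws IOException {
        System.out.println("trash subdirectory created: " + demo());
    }
}
```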

[jira] [Commented] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583554#comment-16583554
 ] 

Szilard Nemeth commented on HADOOP-15674:
-

Thanks [~xiaochen] for your comments!
Yes, unfortunately my branch was diffed against origin/trunk and was not 
rebased onto the latest commit; that is how the FSNameSystem change sneaked 
into the patch. Thanks for pointing it out.
Added {{System.clearProperty}} as you suggested.
Good to know that if a patch applies cleanly to other branches, I don't need 
to upload a separate patch for each branch.

Please check whether my latest patch is fine.
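The {{System.clearProperty}} suggestion mentioned above follows a general pattern: tests that set JVM-wide system properties (here {{https.protocols}} and {{https.cipherSuites}}) must clear them afterwards so later tests in the same JVM are unaffected. This is a hedged sketch of that pattern, not the actual TestSSLHttpServer code; in JUnit the cleanup would live in an @After/@AfterClass method.

```java
// Hedged sketch of system-property hygiene in tests -- not the real
// TestSSLHttpServer. The try/finally stands in for JUnit's @After cleanup.
public class SystemPropertyHygiene {

    static String run() {
        System.setProperty("https.cipherSuites",
            "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256");
        try {
            // ... exercise the SSL server with the restricted cipher list ...
            return System.getProperty("https.cipherSuites");
        } finally {
            // Restore the pre-test state so the setting cannot leak into
            // other tests running in the same JVM.
            System.clearProperty("https.cipherSuites");
        }
    }

    public static void main(String[] args) {
        System.out.println("during test: " + run());
        System.out.println("after test:  " + System.getProperty("https.cipherSuites"));
    }
}
```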

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}






[jira] [Updated] (HADOOP-15674) Test failure TestSSLHttpServer.testExcludedCiphers with TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite

2018-08-17 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated HADOOP-15674:

Attachment: HADOOP-15674.004.patch

> Test failure TestSSLHttpServer.testExcludedCiphers with 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 cipher suite
> --
>
> Key: HADOOP-15674
> URL: https://issues.apache.org/jira/browse/HADOOP-15674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Gabor Bota
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15674-branch-2.001.patch, 
> HADOOP-15674-branch-2.002.patch, HADOOP-15674-branch-2.003.patch, 
> HADOOP-15674-branch-3.0.0.001.patch, HADOOP-15674-branch-3.0.0.002.patch, 
> HADOOP-15674-branch-3.0.0.003.patch, HADOOP-15674.001.patch, 
> HADOOP-15674.002.patch, HADOOP-15674.003.patch, HADOOP-15674.004.patch
>
>
> Running {{hadoop/hadoop-common-project/hadoop-common# mvn test 
> -Dtest="TestSSLHttpServer#testExcludedCiphers" -Dhttps.protocols=TLSv1.2 
> -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256}} fails with:
> {noformat}
> Error Message
> No Ciphers in common, SSLHandshake must fail.
> Stacktrace
>   java.lang.AssertionError: No Ciphers in common, SSLHandshake must fail.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.http.TestSSLHttpServer.testExcludedCiphers(TestSSLHttpServer.java:178)
> {noformat}


