[jira] [Commented] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-06-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040332#comment-16040332
 ] 

ASF GitHub Bot commented on HADOOP-14208:
-

GitHub user wenxinhe opened a pull request:

https://github.com/apache/hadoop/pull/229

HADOOP-14208. Fix typo in the top page in branch-2.8



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wenxinhe/hadoop HADOOP-14208

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/229.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #229


commit 547392f71b6966ad7e9e944a714347501f6dfd32
Author: wenxin he 
Date:   2017-06-07T06:56:25Z

HADOOP-14208. Fix typo in the top page in branch-2.8




> Fix typo in the top page in branch-2.8
> --
>
> Key: HADOOP-14208
> URL: https://issues.apache.org/jira/browse/HADOOP-14208
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Akira Ajisaka
>Assignee: wenxin he
>Priority: Trivial
>  Labels: newbie
>
> There is a typo in the summary of the release.
> {noformat:title=index.md.vm}
> *   Allow node labels get specificed in submitting MR jobs
> {noformat}
> specificed should be specified.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-06-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040313#comment-16040313
 ] 

ASF GitHub Bot commented on HADOOP-14208:
-

Github user wenxinhe closed the pull request at:

https://github.com/apache/hadoop/pull/228








[jira] [Commented] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-06-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040306#comment-16040306
 ] 

ASF GitHub Bot commented on HADOOP-14208:
-

GitHub user wenxinhe opened a pull request:

https://github.com/apache/hadoop/pull/228

HADOOP-14208. Fix typo in the top page in branch-2.8



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wenxinhe/hadoop HADOOP-14208

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/228.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #228


commit 0627c53d319335c1d770225dce133a28bc8f39fd
Author: 何文鑫10087558 
Date:   2017-06-07T06:34:49Z

HADOOP-14208. Fix typo in the top page in branch-2.8










[jira] [Commented] (HADOOP-14479) Erasurecode testcase failures with ISA-L

2017-06-06 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040293#comment-16040293
 ] 

SammiChen commented on HADOOP-14479:


Sure, [~andrew.wang], I will look into it.

> Erasurecode testcase failures with ISA-L 
> -
>
> Key: HADOOP-14479
> URL: https://issues.apache.org/jira/browse/HADOOP-14479
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
> Environment: x86_64 Ubuntu 16.04.02 LTS
>Reporter: Ayappan
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> I built hadoop with ISA-L support. I took the ISA-L code from 
> https://github.com/01org/isa-l (tag v2.18.0) and built it. While running the 
> UTs, the following three testcases fail:
> 1)TestHHXORErasureCoder
> Tests run: 7, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.106 sec <<< 
> FAILURE! - in org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
> testCodingDirectBuffer_10x4_erasing_p1(org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder)
>   Time elapsed: 0.029 sec  <<< FAILURE!
> java.lang.AssertionError: Decoding and comparing failed.
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.io.erasurecode.TestCoderBase.compareAndVerify(TestCoderBase.java:170)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.compareAndVerify(TestErasureCoderBase.java:141)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.performTestCoding(TestErasureCoderBase.java:98)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.testCoding(TestErasureCoderBase.java:69)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder.testCodingDirectBuffer_10x4_erasing_p1(TestHHXORErasureCoder.java:64)
> 2)TestRSErasureCoder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.591 sec - 
> in org.apache.hadoop.io.erasurecode.coder.TestXORCoder
> Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f486a28a6e4, pid=8970, tid=0x7f4850927700
> #
> # JRE version: OpenJDK Runtime Environment (8.0_121-b13) (build 
> 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 
> compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x8e6e4]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> # /home/ayappan/hadoop/hadoop-common-project/hadoop-common/hs_err_pid8970.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> 3)TestCodecRawCoderMapping
> Running org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec <<< 
> FAILURE! - in org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
> testRSDefaultRawCoder(org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping)
>   Time elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping.testRSDefaultRawCoder(TestCodecRawCoderMapping.java:58)






[jira] [Updated] (HADOOP-14479) Erasurecode testcase failures with ISA-L

2017-06-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14479:
-
  Labels: hdfs-ec-3.0-must-do  (was: )
Priority: Critical  (was: Major)
Target Version/s: 3.0.0-alpha4







[jira] [Commented] (HADOOP-14479) Erasurecode testcase failures with ISA-L

2017-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040282#comment-16040282
 ] 

Andrew Wang commented on HADOOP-14479:
--

Thanks for the report [~Ayappan]. I tried this locally too and ran into the 
same problem.

We're supposed to be running the ISA-L tests as part of precommit, so I'm not 
sure what happened here. It seems there's a test gap between ISA-L and the Java 
coder (time to revisit HDFS-11066?).

[~Sammi] / [~drankye] could you assist with debugging this?







[jira] [Commented] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-06-06 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040242#comment-16040242
 ] 

wenxin he commented on HADOOP-14208:


Hi [~ajisakaa], sorry to bother you.

I'll open a pull request to fix this typo later. Before that, I want to confirm 
one thing: another occurrence of the 'specificed' typo was found in 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md as well. 
Should I fix it in this patch, or create a separate issue for it?







[jira] [Commented] (HADOOP-14476) make InconsistentAmazonS3Client usable in downstream tests

2017-06-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040181#comment-16040181
 ] 

Aaron Fabbri commented on HADOOP-14476:
---

Sounds good [~ste...@apache.org]. I'm doing a bit of test cleanup while I'm at 
it; I'll post a follow-up tomorrow.

> make InconsistentAmazonS3Client usable in downstream tests
> --
>
> Key: HADOOP-14476
> URL: https://issues.apache.org/jira/browse/HADOOP-14476
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
> Attachments: HADOOP-14476-HADOOP-13345.001.patch
>
>
> It's important for downstream apps to be able to verify that s3guard works by 
> making the AWS client inconsistent (so demonstrate problems), then turn 
> s3guard on to verify that they go away. 
> This can be done by exposing the {{InconsistentAmazonS3Client}}
> # move the factory to the production source
> # make delay configurable for when you want a really long delay
> # have factory code log @ warn when a non-default factory is used.
> # mention in s3a testing.md
> I think we could look at the name of the option, 
> {{fs.s3a.s3.client.factory.impl}} too. I'd like something which has 
> "internal" in it, and without the duplication of s3a.s3






[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040054#comment-16040054
 ] 

Xiao Chen commented on HADOOP-13854:


Hi [~yzhangal],
Could you please take a look when you have time? Thanks a lot.

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch, HADOOP-13854.02.patch
>
>
> It appears that if a KMS HTTP request is malformed, it can be rejected by Tomcat 
> and a response sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
>  overriding Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit log and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.






[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ

2017-06-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040025#comment-16040025
 ] 

Kai Zheng commented on HADOOP-14146:


In addition to the above comments, a few more:

1. Referring to the code below, NT_GSS_KRB5_PRINCIPAL could be renamed NT_GSS_KRB5_PRINCIPAL_OID for consistency.
{code}
+  public static final Oid GSS_SPNEGO_MECH_OID =
+  getNumericOidInstance("1.3.6.1.5.5.2");
+  public static final Oid GSS_KRB5_MECH_OID =
+  getNumericOidInstance("1.2.840.113554.1.2.2");
+  public static final Oid NT_GSS_KRB5_PRINCIPAL =
+  getNumericOidInstance("1.2.840.113554.1.2.2.1");
{code}
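Standalone, the pattern under discussion could be sketched as follows (the wrapper behavior of {{getNumericOidInstance}} is assumed; {{Oid}} is the JDK GSS-API class):

```java
import org.ietf.jgss.GSSException;
import org.ietf.jgss.Oid;

public class OidConstantsSketch {
    // Assumed helper mirroring the patch's getNumericOidInstance: wraps
    // the checked GSSException so the constants can be built in static
    // initializers.
    private static Oid getNumericOidInstance(String dottedOid) {
        try {
            return new Oid(dottedOid);
        } catch (GSSException e) {
            throw new IllegalArgumentException("Bad OID " + dottedOid, e);
        }
    }

    public static final Oid GSS_SPNEGO_MECH_OID =
        getNumericOidInstance("1.3.6.1.5.5.2");
    public static final Oid GSS_KRB5_MECH_OID =
        getNumericOidInstance("1.2.840.113554.1.2.2");
    // Renamed with the _OID suffix, as suggested in point 1.
    public static final Oid NT_GSS_KRB5_PRINCIPAL_OID =
        getNumericOidInstance("1.2.840.113554.1.2.2.1");

    public static void main(String[] args) {
        // Oid.toString() yields the dot-separated form.
        System.out.println(NT_GSS_KRB5_PRINCIPAL_OID);
    }
}
```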

2. Referring to the code below, the message could be more specific, e.g. "Invalid server 
principal {} decoded from client request".
{code}
+final String serverPrincipal =
+KerberosUtil.getTokenServerName(clientToken);
+if (!serverPrincipal.startsWith("HTTP/")) {
+  throw new IllegalArgumentException(
+  "Invalid server principal: " + serverPrincipal);
+}
{code}
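Standalone, the check with a more specific message might look like this (a sketch; the surrounding token decoding is omitted and the names are assumptions, not the actual patch):

```java
public class ServerPrincipalCheckSketch {
    // Validates a server principal decoded from the client's AP-REQ,
    // using the more specific error message suggested above.
    static String checkServerPrincipal(String serverPrincipal) {
        if (!serverPrincipal.startsWith("HTTP/")) {
            throw new IllegalArgumentException(
                "Invalid server principal " + serverPrincipal
                + " decoded from client request");
        }
        return serverPrincipal;
    }

    public static void main(String[] args) {
        System.out.println(
            checkServerPrincipal("HTTP/nn.example.com@EXAMPLE.COM"));
    }
}
```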
3. You removed the login check for each HTTP server principal listed in the 
keytab; instead, you put them into the server subject directly. Is it possible 
for a server principal to be expired or invalid at that time?
{code}
-  for (String spnegoPrincipal : spnegoPrincipals) {
-LOG.info("Login using keytab {}, for principal {}",
-keytab, spnegoPrincipal);
-final KerberosConfiguration kerberosConfiguration =
-new KerberosConfiguration(keytab, spnegoPrincipal);
-final LoginContext loginContext =
-new LoginContext("", serverSubject, null, kerberosConfiguration);
-try {
-  loginContext.login();
-} catch (LoginException le) {
-  LOG.warn("Failed to login as [{}]", spnegoPrincipal, le);
-  throw new AuthenticationException(le);  
-}
-loginContexts.add(loginContext);
{code}
4. Besides, you might want to call {{KerberosUtil.hasKerberosKeyTab}} with the 
keytab instance placed in the subject; I wonder how that instance is used in 
the subsequent SPNEGO authentication of the client token. Could you explain a 
bit, either here or as a comment in the code? Thanks!
{code}
+  KeyTab keytabInstance = KeyTab.getInstance(keytabFile);
+  serverSubject.getPrivateCredentials().add(keytabInstance);
{code}
5. Is this a good chance to move the following block somewhere like 
{{KerberosUtil}}?
{code}
/* Return the OS login module class name */
private static String getOSLoginModuleName() {
  if (IBM_JAVA) {
if (windows) {
  return is64Bit ? "com.ibm.security.auth.module.Win64LoginModule"
  : "com.ibm.security.auth.module.NTLoginModule";
} else if (aix) {
  return is64Bit ? "com.ibm.security.auth.module.AIX64LoginModule"
  : "com.ibm.security.auth.module.AIXLoginModule";
} else {
  return "com.ibm.security.auth.module.LinuxLoginModule";
}
  } else {
return windows ? "com.sun.security.auth.module.NTLoginModule"
: "com.sun.security.auth.module.UnixLoginModule";
  }
}
{code}

> KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
> 
>
> Key: HADOOP-14146
> URL: https://issues.apache.org/jira/browse/HADOOP-14146
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14146.1.patch, HADOOP-14146.2.patch, 
> HADOOP-14146.patch
>
>
> Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add 
> multiple SPN host and/or realm support to spnego authentication.  The basic 
> problem is the server tries to guess and/or brute force what SPN the client 
> used.  The server should just decode the SPN from the AP-REQ.






[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-06-06 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040018#comment-16040018
 ] 

Yonger commented on HADOOP-14475:
-

In my case, I just ran DFSIO with 20 map/reduce tasks in a 4-node cluster, and 
I enabled debug logging to the console. If you grep the attached log file for 
"Metrics system initialized", you will find that the S3A file system was 
initialized multiple times for one bucket (s3a://test-bucket).

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
> Attachments: s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.






[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-06-06 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14475:

Attachment: stdout.zip







[jira] [Commented] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040012#comment-16040012
 ] 

Hadoop QA commented on HADOOP-14500:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14500 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871738/HADOOP-14500-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4a563314025c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b65100c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12463/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12463/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-14500-001.patch
>
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did early analysis and found [HADOOP-14478] maybe the reason. I think we 
> c

[jira] [Updated] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-14500:
--
Status: Patch Available  (was: Open)

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-14500-001.patch
>
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I 
> think we can fix the test itself here.






[jira] [Updated] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-14500:
--
Attachment: HADOOP-14500-001.patch

Tests were executed against WASB (Japan region).

{noformat}
Running org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.838 sec - in 
org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Running org.apache.hadoop.fs.azure.TestAzureFileSystemErrorConditions
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.786 sec - in 
org.apache.hadoop.fs.azure.TestAzureFileSystemErrorConditions

Results :

Failed tests:
  
TestNativeAzureFileSystemMocked>NativeAzureFileSystemBaseTest.testFolderLastModifiedTime:649
 null

Tests run: 703, Failures: 1, Errors: 0, Skipped: 119
{noformat}

Test case failure is not related to the patch.

{noformat}
testFolderLastModifiedTime(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked)
  Time elapsed: 15.023 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystemBaseTest.testFolderLastModifiedTime(NativeAzureFileSystemBaseTest.java:649)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

{noformat}

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-14500-001.patch
>
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I 
> think we can fix the test itself here.






[jira] [Assigned] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-06-06 Thread wenxin he (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenxin he reassigned HADOOP-14208:
--

Assignee: wenxin he

> Fix typo in the top page in branch-2.8
> --
>
> Key: HADOOP-14208
> URL: https://issues.apache.org/jira/browse/HADOOP-14208
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Akira Ajisaka
>Assignee: wenxin he
>Priority: Trivial
>  Labels: newbie
>
> There is a typo in the summary of the release.
> {noformat:title=index.md.vm}
> *   Allow node labels get specificed in submitting MR jobs
> {noformat}
> specificed should be specified.






[jira] [Commented] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039927#comment-16039927
 ] 

Andrew Wang commented on HADOOP-14501:
--

Looking at https://github.com/FasterXML/aalto-xml/commits/master, this library 
does not appear to be under active development: only two commits across 2016 
and 2017 combined.

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Priority: Blocker
>
> [~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference (&bar;) encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference (&wacky;) encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}






[jira] [Updated] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-06-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14399:
-
Fix Version/s: 3.0.0-alpha4

> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}
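As background for readers unfamiliar with XInclude, the failing setup above 
(core-site.xml xincluding auth-keys.xml, which in turn xincludes a file by 
absolute URI) can be sketched with Python's standard library. This is only an 
illustration of the XInclude mechanism itself, with hypothetical file names, 
not Hadoop's Configuration code:

```python
import os
import tempfile
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

# Hypothetical stand-in for an included secrets file (like auth-keys.xml).
tmpdir = tempfile.mkdtemp()
included = os.path.join(tmpdir, "auth-keys.xml")
with open(included, "w") as f:
    f.write("<property><name>fs.secret</name><value>xyz</value></property>")

# A configuration that xincludes that file by absolute path, the case
# HADOOP-14399 reports as mishandled by Hadoop's Configuration parser.
conf = ET.fromstring(
    '<configuration xmlns:xi="http://www.w3.org/2001/XInclude">'
    '<xi:include href="%s"/>'
    '</configuration>' % included)

ElementInclude.include(conf)   # resolves xi:include elements in place
print(conf.find("property/name").text)
```

In Hadoop itself the equivalent resolution happens inside Configuration's own 
XML parsing; the point is only that an absolute file href is legal XInclude 
and must resolve, including transitively through an included file.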






[jira] [Assigned] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan reassigned HADOOP-14500:
-

Assignee: Rajesh Balamohan

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I 
> think we can fix the test itself here.






[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039911#comment-16039911
 ] 

Hadoop QA commented on HADOOP-13854:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13854 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871721/HADOOP-13854.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0210ca28b02e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c31cb87 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12461/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12461/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch, HADOOP-13854.02.patch
>
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by Tomcat 
> and a response is sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop

[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039906#comment-16039906
 ] 

Hadoop QA commented on HADOOP-14457:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-common-project/hadoop-common in HADOOP-13345 
has 19 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 23s{color} | {color:orange} root: The patch generated 5 new + 12 unchanged - 
0 fixed = 17 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871703/HADOOP-14457-HADOOP-13345.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 06d542d404d1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 76b0751 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12459/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12459/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-B

[jira] [Updated] (HADOOP-14448) Play nice with ITestS3AEncryptionSSEC

2017-06-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14448:
---
Attachment: HADOOP-14448-HADOOP-13345.001.patch

Attaching a patch that skips the tests whose expected AccessDeniedException 
comes only from metadata operations that we short-circuit when S3Guard is 
enabled.

While testing this, I've seen testCreateFileThenMoveWithDifferentSSECKey and 
testCreateFileThenReadWithDifferentSSECKey fail (in the sense that they fail to 
throw an exception) both with and without S3Guard. So it seems unrelated to 
S3Guard, but I'm wondering if we're actually hitting a consistency issue: 
there's no object, or there's an old unencrypted object, so the test sequence 
doesn't throw an AccessDeniedException. Just thinking out loud.

But this patch addresses what appears to be the only S3Guard-related part of 
the problem.
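The real tests are JUnit, but the skip pattern the patch applies can be 
sketched in Python's unittest; the flag and test body below are hypothetical 
placeholders, not the actual ITestS3AEncryptionSSEC code:

```python
import unittest

# Hypothetical flag; the real suite would read the S3Guard metadata store
# setting from the test configuration.
S3GUARD_ENABLED = True

class EncryptionSSECSketch(unittest.TestCase):
    def test_get_file_status_with_wrong_key(self):
        if S3GUARD_ENABLED:
            # getFileStatus may be answered from the metadata store and never
            # reach S3, so the expected AccessDeniedException never fires.
            self.skipTest("metadata operation short-circuited by S3Guard")
        self.fail("would assert an AccessDeniedException from raw S3 here")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(EncryptionSSECSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))
```

With the flag set, the run reports the test as skipped rather than failed, 
which is the behavior the patch wants when S3Guard short-circuits the metadata 
operation.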

> Play nice with ITestS3AEncryptionSSEC
> -
>
> Key: HADOOP-14448
> URL: https://issues.apache.org/jira/browse/HADOOP-14448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Sean Mackrory
> Attachments: HADOOP-14448-HADOOP-13345.001.patch
>
>
> HADOOP-14035 hasn't yet been merged with HADOOP-13345, but it adds tests that 
> will break when run with S3Guard enabled. It expects that certain filesystem 
> actions will throw exceptions when the client-provided encryption key is not 
> configured properly, but those actions may sometimes bypass S3 entirely 
> thanks to S3Guard (for example, getFileStatus may not actually need to invoke 
> s3GetFileStatus). If the exception is never thrown, the test fails.
> At a minimum we should tweak the tests so they definitely invoke S3 directly, 
> or just skip the offending tests when anything but the Null implementation is 
> in use. This also opens the larger question of whether or not S3Guard should 
> be serving up metadata that is otherwise only accessible when an encryption 
> key is provided.






[jira] [Commented] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039880#comment-16039880
 ] 

Andrew Wang commented on HADOOP-14501:
--

[~jeagles] do you mind triaging this? Starting it off as a blocker since I 
think these same issues affect Hadoop too. If so, aalto-stax might not be ready 
for primetime.

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Priority: Blocker
>
> [~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference (&bar;) encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference (&wacky;) encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}






[jira] [Created] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-06 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14501:


 Summary: aalto-xml cannot handle some odd XML features
 Key: HADOOP-14501
 URL: https://issues.apache.org/jira/browse/HADOOP-14501
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.9.0, 3.0.0-alpha4
Reporter: Andrew Wang
Priority: Blocker


[~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
failures due to what look like functionality gaps in the new aalto-xml stax 
implementation pulled in by HADOOP-14216:

{noformat}
   [junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
character ('ü' (code 252))

   [junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
reference (&bar;) encountered in entity expanding mode: operation not (yet) 
implemented
...
   [junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
entity reference (&wacky;) encountered in entity expanding mode: operation not 
(yet) implemented
{noformat}

These were from the following test case executions:

{noformat}
NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
-Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
-Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
-Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
-Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
-Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
-Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
-Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA -Dtests.slow=true 
-Dtests.locale=hr -Dtests.timezone=America/Barbados -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
{noformat}
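As a point of comparison (not part of the original report): the JDK's built-in StAX parser expands internal general entities by default, which appears to be the behaviour these Solr tests depend on. Below is a minimal, self-contained sketch using only the JDK; the XML snippet and class name are invented for illustration.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class EntityExpansionDemo {
    // Parses XML containing a general entity (&bar;) defined in an internal DTD.
    // The JDK's default StAX parser expands it; aalto-xml reportedly fails with
    // "General entity reference (&bar;) encountered in entity expanding mode".
    public static String textOf(String xml) throws Exception {
        XMLInputFactory f = XMLInputFactory.newInstance();
        // Entity replacement is on by default per the StAX spec; made explicit here.
        f.setProperty(XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES, Boolean.TRUE);
        XMLStreamReader r = f.createXMLStreamReader(new StringReader(xml));
        StringBuilder sb = new StringBuilder();
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.CHARACTERS) {
                sb.append(r.getText());
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<!DOCTYPE foo [<!ENTITY bar \"expanded\">]><foo>&bar;</foo>";
        System.out.println(textOf(xml));
    }
}
```

Running the same input through aalto-xml instead of the JDK factory would be a straightforward way to reproduce the gap outside of the Solr test suite.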



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039861#comment-16039861
 ] 

Xiao Chen commented on HADOOP-13854:


Patch 2 to fix the style issue.

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch, HADOOP-13854.02.patch
>
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by Tomcat 
> and a response is sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
>  overriding Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit log and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.
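One way to improve this, sketched here outside any framework (the class and method below are invented stand-ins for illustration, not the actual KMS patch), is to log the exception details at the point where it is mapped to an HTTP response, before the response is returned:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class MalformedRequestMapper {
    private static final Logger LOG = Logger.getLogger("kms-exceptions");

    // Stand-in for a KMSExceptionsProvider#toResponse-style method: map the
    // exception to an HTTP status, but log the full details first so kms.log
    // has a trace to match the ERROR audit entry.
    public static int toStatus(Throwable ex) {
        LOG.log(Level.WARNING, "Request failed: " + ex.getMessage(), ex);
        if (ex instanceof IllegalArgumentException) {
            return 400;  // malformed request
        }
        return 500;      // anything else is a server-side error
    }

    public static void main(String[] args) {
        System.out.println(toStatus(new IllegalArgumentException("bad request body")));
    }
}
```

The key point is simply that the log call happens unconditionally before the response is built, so no failure path is silent.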






[jira] [Updated] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13854:
---
Attachment: HADOOP-13854.02.patch

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch, HADOOP-13854.02.patch
>
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by Tomcat 
> and a response is sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
>  overriding Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit log and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.






[jira] [Updated] (HADOOP-13174) Add more debug logs for delegation tokens and authentication

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13174:
---
Attachment: HADOOP-13174.05.patch

Reviving this.
Patch 5 adds an info log in {{AbstractDelegationTokenSecretManager}} when 
expired tokens are removed - currently we log on create/renew/cancel, but not 
on removal upon expiration.

These logs can be seen by running KMS unit tests: set the logger in 
{{hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties}} to 
{{TRACE}}, and run {{TestKMS#testDelegationTokenAccess}} for example.
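A self-contained sketch of the kind of log line patch 5 adds (the class below is a simplified, invented stand-in for AbstractDelegationTokenSecretManager, with println standing in for the logger; not the actual Hadoop code):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ExpiredTokenSweeper {
    // token id -> absolute expiry time in millis
    private final Map<String, Long> tokens = new HashMap<>();

    public void add(String id, long expiryMillis) {
        tokens.put(id, expiryMillis);
    }

    // Remove expired tokens, logging each removal. This is the gap the patch
    // fills: create/renew/cancel were already logged, removal on expiration
    // was not.
    public int removeExpired(long nowMillis) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = tokens.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() < nowMillis) {
                System.out.println("Removing expired token " + e.getKey()
                    + " (expired at " + e.getValue() + ")");
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() {
        return tokens.size();
    }
}
```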

> Add more debug logs for delegation tokens and authentication
> 
>
> Key: HADOOP-13174
> URL: https://issues.apache.org/jira/browse/HADOOP-13174
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13174.01.patch, HADOOP-13174.02.patch, 
> HADOOP-13174.03.patch, HADOOP-13174.04.patch, HADOOP-13174.05.patch
>
>
> Recently I debugged several authentication related problems, and found that 
> the debug logs are not enough to identify a problem.
> This jira improves it by adding more debug/trace logs along the line.






[jira] [Commented] (HADOOP-14461) Azure: handle failure gracefully in case of missing account access key

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039842#comment-16039842
 ] 

Hadoop QA commented on HADOOP-14461:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 54 unchanged - 4 fixed = 54 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14461 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871708/HADOOP-14461.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f677f72b9183 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c31cb87 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12460/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12460/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: handle failure gracefully in case of missing account access key
> --
>
> Key: HADOOP-14461
> URL: https://issues.apache.org/jira/browse/HADOOP-14461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14461.000.patch, HADOOP-14461.001.patch, 
> HADOOP-14461.002.patch
>
>
> Currently, if the {{fs.azure.account.key.youraccount}} is missing, we will get 
> an error stack like this:
> {code}
> java.lang.IllegalArgumentException: The String is not a valid Base64-encoded 
> string.
>   at com.microsoft.azure.storage.core.Base64.decode(Base64.java:63)
>

[jira] [Updated] (HADOOP-14461) Azure: handle failure gracefully in case of missing account access key

2017-06-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14461:
---
Attachment: HADOOP-14461.002.patch

When the account access key is missing,

{code:title=before}
$ hadoop fs -ls wasb://t...@liuml07.blob.core.windows.net/
ls: org.apache.hadoop.fs.azure.AzureException: Container test in account 
liuml07.blob.core.windows.net not found, and we can't create it using 
anoynomous credentials, and no credentials found for them in the configuration.
{code}

{code:title=after}
$ hadoop fs -ls wasb://t...@liuml07.blob.core.windows.net/
ls: org.apache.hadoop.fs.azure.AzureException: No credentials found for account 
liuml07.blob.core.windows.net in the configuration, and we are unable to access 
container test in this account using anonymous credentials. Please check if the 
container exists first. If it is not publicly available, you have to provide 
account credentials
{code}

The v2 patch has been tested against the US West region. All tests pass except 
{{TestFileSystemOperationExceptionHandling}} and 
{{TestFileSystemOperationsExceptionHandlingMultiThreaded}}, which are not 
related to this change; they are caused by [HADOOP-14478] and are being tracked 
in [HADOOP-14500].
{code:title=after reverting HADOOP-14478}
hadoop/hadoop-tools/hadoop-azure $ mvn test -q
Results :

Tests run: 703, Failures: 0, Errors: 0, Skipped: 119
{code}
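The fail-fast idea behind the improved message can be sketched in isolation as follows (the class and method are invented for illustration; only the `fs.azure.account.key.<account>` property-name convention comes from the issue): resolve the per-account key up front and raise an actionable message when it is absent, instead of attempting anonymous access and surfacing a Base64 decoding error.

```java
import java.util.Map;

public class AccountKeyCheck {
    // Resolve the account key from configuration, failing with an actionable
    // message when it is missing. A hypothetical sketch, not the WASB code.
    public static String resolveKey(Map<String, String> conf, String account) {
        String key = conf.get("fs.azure.account.key." + account);
        if (key == null || key.isEmpty()) {
            throw new IllegalArgumentException(
                "No credentials found for account " + account + " in the "
                + "configuration, and we are unable to access the container "
                + "using anonymous credentials. Please provide account "
                + "credentials.");
        }
        return key;
    }
}
```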

Ping [~snayak] and [~ste...@apache.org] for review.

> Azure: handle failure gracefully in case of missing account access key
> --
>
> Key: HADOOP-14461
> URL: https://issues.apache.org/jira/browse/HADOOP-14461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14461.000.patch, HADOOP-14461.001.patch, 
> HADOOP-14461.002.patch
>
>
> Currently, if the {{fs.azure.account.key.youraccount}} is missing, we will get 
> an error stack like this:
> {code}
> java.lang.IllegalArgumentException: The String is not a valid Base64-encoded 
> string.
>   at com.microsoft.azure.storage.core.Base64.decode(Base64.java:63)
>   at 
> com.microsoft.azure.storage.StorageCredentialsAccountAndKey.(StorageCredentialsAccountAndKey.java:81)
>   at 
> org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.createStorageAccount(AzureBlobStorageTestAccount.java:464)
>   at 
> org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.createTestAccount(AzureBlobStorageTestAccount.java:501)
>   at 
> org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.create(AzureBlobStorageTestAccount.java:522)
>   at 
> org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.create(AzureBlobStorageTestAccount.java:451)
>   at 
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization.createTestAccount(TestNativeAzureFileSystemAuthorization.java:50)
>   at 
> org.apache.hadoop.fs.azure.AbstractWasbTestBase.setUp(AbstractWasbTestBase.java:47)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUni

[jira] [Comment Edited] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused

2017-06-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039792#comment-16039792
 ] 

Erik Krogen edited comment on HADOOP-14492 at 6/6/17 10:45 PM:
---

Hey [~cltlfcjin] - these metrics can be different since the RpcDetailed metric 
is the time to complete a DN's entire block report, which may include multiple 
storages (more than one value for {{dfs.datanode.data.dir}}), each of which 
contributes to the NameNodeActivity metric. In the cluster this was recorded 
from, do your datanodes run with multiple storages? Even without multiple 
storages, the AvgTime may be different because the RpcDetailed time is measured 
from different points in the code (e.g. it includes the time to acquire the 
FSNamesystem lock which the NameNodeActivity metric does not).

Assigning to myself in case there is in fact a bug since HADOOP-13782 was my 
patch.


was (Author: xkrogen):
Hey [~cltlfcjin] - these metrics can be different since the RpcDetailed metric 
is the time to complete a DN's entire block report, which may include multiple 
storages (more than one value for {{dfs.datanode.data.dir}}), each of which 
contributes to the NameNodeActivity metric. In the cluster this was recorded 
from, do your datanodes run with multiple storages?

Assigning to myself in case there is in fact a bug since HADOOP-13782 was my 
patch.

> RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction 
> cause the Xavgtime confused
> -
>
> Key: HADOOP-14492
> URL: https://issues.apache.org/jira/browse/HADOOP-14492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Lantao Jin
>Assignee: Erik Krogen
>Priority: Minor
>
> For performance purposes, 
> [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the 
> metrics behaviour in {{RpcDetailedMetrics}}.
> In 2.7.4:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRatesWithAggregation rates;
> {code}
> In the old version:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRates rates;
> {code}
> But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old 
> versions:
> {code}
> public class NameNodeMetrics {
>   @Metric("Block report") MutableRate blockReport;
> {code}
> This causes the JMX metrics to differ significantly between them.
> {quote}
> name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
> modelerType: "RpcDetailedActivityForPort8030",
> tag.port: "8030",
> tag.Context: "rpcdetailed",
> ...
> BlockReportNumOps: 237634,
> BlockReportAvgTime: 1382,
> ...
> name: "Hadoop:service=NameNode,name=NameNodeActivity",
> modelerType: "NameNodeActivity",
> tag.ProcessName: "NameNode",
> ...
> BlockReportNumOps: 2592932,
> BlockReportAvgTime: 19.258064516129032,
> ...
> {quote}
> In the old version, they are correct.
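To make the NumOps/AvgTime divergence concrete, here is a small illustrative sketch (class and numbers invented): a single block-report RPC covering three storages yields one RPC-level timing sample, while NameNodeActivity records one sample per storage, so the two averages can legitimately differ even when both metrics are healthy.

```java
public class BlockReportAverages {
    // One RPC-level sample per block report vs. one NameNodeActivity sample
    // per storage: the same work produces different NumOps and AvgTime.
    public static double avg(double[] samples) {
        double sum = 0;
        for (double s : samples) {
            sum += s;
        }
        return sum / samples.length;
    }

    public static void main(String[] args) {
        // A DN with 3 storages, each taking ~20 ms to process.
        double[] perStorage = {20, 20, 20};  // NameNodeActivity samples
        double[] perRpc = {60};              // RpcDetailedActivity sample
        System.out.println(avg(perStorage)); // 20.0
        System.out.println(avg(perRpc));     // 60.0
    }
}
```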






[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13720:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14497

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch, HADOOP-13720.007.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>   throws InvalidToken {
> assert Thread.holdsLock(this);
> DelegationTokenInformation info = getTokenInfo(identifier);
> if (info == null) {
>   throw new InvalidToken("token (" + identifier.toString()
>   + ") can't be found in cache");
> }
> if (info.getRenewDate() < Time.now()) {
>   throw new InvalidToken("token (" + identifier.toString() + ") is 
> expired");
> }
> return info;
>   } 
> {code}
> When a token is expired, we throw the above exception without printing out 
> the {{info.getRenewDate()}} in the message. If we print it out, we can see 
> for how long the token has not been renewed. This will help us investigate 
> certain issues.
> Filing this jira as a request to add that.
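A minimal sketch of the requested change (the class below is a simplified, invented stand-in for the secret manager and {{DelegationTokenInformation}}, not the actual Hadoop classes): include the renew date in the expiry message so the log shows how long the token has gone unrenewed.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenChecker {
    // token identifier -> renew date in absolute millis, a stand-in for
    // DelegationTokenInformation#getRenewDate().
    private final Map<String, Long> renewDates = new HashMap<>();

    public void put(String id, long renewDate) {
        renewDates.put(id, renewDate);
    }

    // Same shape as checkToken, but the expiry message carries the renew date
    // so an investigator can see how long ago renewal was expected.
    public long checkToken(String id, long now) {
        Long renewDate = renewDates.get(id);
        if (renewDate == null) {
            throw new IllegalStateException(
                "token (" + id + ") can't be found in cache");
        }
        if (renewDate < now) {
            throw new IllegalStateException("token (" + id + ") is expired, "
                + "current time: " + now + " expected renewal time: " + renewDate);
        }
        return renewDate;
    }
}
```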






[jira] [Comment Edited] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039810#comment-16039810
 ] 

Mingliang Liu edited comment on HADOOP-14500 at 6/6/17 10:43 PM:
-

The reason may be that [HADOOP-14478] avoids re-opening streams while seeking, 
so the expected exceptions are not thrown in the tests. We can fix the tests 
here, I think. Ping [~snayak].


was (Author: liuml07):
The reason may be that, [HADOOP-14478] avoids re-open of streams while seeking, 
so that the expected exception are not thrown in test. We can fix the test here 
I thin. Ping [~snayak].

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>
> The following tests fail:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I think 
> we can fix the tests themselves here.






[jira] [Commented] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039810#comment-16039810
 ] 

Mingliang Liu commented on HADOOP-14500:


The reason may be that [HADOOP-14478] avoids re-opening streams while seeking, 
so the expected exceptions are not thrown in the tests. We can fix the tests 
here, I think. Ping [~snayak].

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>
> The following tests fail:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I think 
> we can fix the tests themselves here.






[jira] [Assigned] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused

2017-06-06 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reassigned HADOOP-14492:


Assignee: Erik Krogen

> RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction 
> cause the Xavgtime confused
> -
>
> Key: HADOOP-14492
> URL: https://issues.apache.org/jira/browse/HADOOP-14492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Lantao Jin
>Assignee: Erik Krogen
>Priority: Minor
>
> For performance purposes, 
> [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the 
> metrics behaviour in {{RpcDetailedMetrics}}.
> In 2.7.4:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRatesWithAggregation rates;
> {code}
> In the old version:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRates rates;
> {code}
> But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old 
> versions:
> {code}
> public class NameNodeMetrics {
>   @Metric("Block report") MutableRate blockReport;
> {code}
> This causes the JMX metrics to differ significantly between them.
> {quote}
> name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
> modelerType: "RpcDetailedActivityForPort8030",
> tag.port: "8030",
> tag.Context: "rpcdetailed",
> ...
> BlockReportNumOps: 237634,
> BlockReportAvgTime: 1382,
> ...
> name: "Hadoop:service=NameNode,name=NameNodeActivity",
> modelerType: "NameNodeActivity",
> tag.ProcessName: "NameNode",
> ...
> BlockReportNumOps: 2592932,
> BlockReportAvgTime: 19.258064516129032,
> ...
> {quote}
> In the old version, they are correct.






[jira] [Comment Edited] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused

2017-06-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039792#comment-16039792
 ] 

Erik Krogen edited comment on HADOOP-14492 at 6/6/17 10:39 PM:
---

Hey [~cltlfcjin] - these metrics can be different since the RpcDetailed metric 
is the time to complete a DN's entire block report, which may include multiple 
storages (more than one value for {{dfs.datanode.data.dir}}), each of which 
contributes to the NameNodeActivity metric. In the cluster this was recorded 
from, do your datanodes run with multiple storages?

Assigning to myself in case there is in fact a bug since HADOOP-13782 was my 
patch.


was (Author: xkrogen):
Hey [~cltlfcjin] - these metrics can be different since the RpcDetailed metric 
is the time to complete a DN's entire block report, which may include multiple 
storages (more than one value for {{dfs.datanode.data.dir}}), each of which 
contributes to the NameNodeActivity metric. In the cluster this was recorded 
from, do your datanodes run with multiple storages?

> RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction 
> cause the Xavgtime confused
> -
>
> Key: HADOOP-14492
> URL: https://issues.apache.org/jira/browse/HADOOP-14492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Lantao Jin
>Assignee: Erik Krogen
>Priority: Minor
>
> For performance purpose, 
> [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] change the 
> metrics behaviour in {{RpcDetailedMetrics}}.
> In 2.7.4:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRatesWithAggregation rates;
> {code}
> In old version:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRates rates;
> {code}
> But {{NameNodeMetrics}} still use {{MutableRate}} whatever in the new or old 
> version:
> {code}
> public class NameNodeMetrics {
>   @Metric("Block report") MutableRate blockReport;
> {code}
> It causes the metrics in JMX very different between them.
> {quote}
> name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
> modelerType: "RpcDetailedActivityForPort8030",
> tag.port: "8030",
> tag.Context: "rpcdetailed",
> ...
> BlockReportNumOps: 237634,
> BlockReportAvgTime: 1382,
> ...
> name: "Hadoop:service=NameNode,name=NameNodeActivity",
> modelerType: "NameNodeActivity",
> tag.ProcessName: "NameNode",
> ...
> BlockReportNumOps: 2592932,
> BlockReportAvgTime: 19.258064516129032,
> ...
> {quote}
> In the old version. They are correct.






[jira] [Updated] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14500:
---
Description: 
The following tests fail:
{code}
TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario 
Expected exception: java.io.FileNotFoundException
TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
 Expected exception: java.io.FileNotFoundException
{code}

I did an early analysis and found that [HADOOP-14478] may be the reason. I think 
we can fix the tests themselves here.

  was:
The following test fails:
{code}
TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario 
Expected exception: 
java.io.FileNotFoundExceptionTestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
 Expected exception: java.io.FileNotFoundException
{code}

I did early analysis and found [HADOOP-14478] maybe the reason. I think we can 
fix the test itself here.


> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>
> The following tests fail:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> TestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I think 
> we can fix the tests themselves here.






[jira] [Commented] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused

2017-06-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039792#comment-16039792
 ] 

Erik Krogen commented on HADOOP-14492:
--

Hey [~cltlfcjin] - these metrics can be different since the RpcDetailed metric 
is the time to complete a DN's entire block report, which may include multiple 
storages (more than one value for {{dfs.datanode.data.dir}}), each of which 
contributes to the NameNodeActivity metric. In the cluster this was recorded 
from, do your datanodes run with multiple storages?

> RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction 
> cause the Xavgtime confused
> -
>
> Key: HADOOP-14492
> URL: https://issues.apache.org/jira/browse/HADOOP-14492
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Lantao Jin
>Priority: Minor
>
> For performance purposes, 
> [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the 
> metrics behaviour in {{RpcDetailedMetrics}}.
> In 2.7.4:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRatesWithAggregation rates;
> {code}
> In the old version:
> {code}
> public class RpcDetailedMetrics {
>   @Metric MutableRates rates;
> {code}
> But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old 
> versions:
> {code}
> public class NameNodeMetrics {
>   @Metric("Block report") MutableRate blockReport;
> {code}
> This causes the JMX metrics to differ significantly between them.
> {quote}
> name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
> modelerType: "RpcDetailedActivityForPort8030",
> tag.port: "8030",
> tag.Context: "rpcdetailed",
> ...
> BlockReportNumOps: 237634,
> BlockReportAvgTime: 1382,
> ...
> name: "Hadoop:service=NameNode,name=NameNodeActivity",
> modelerType: "NameNodeActivity",
> tag.ProcessName: "NameNode",
> ...
> BlockReportNumOps: 2592932,
> BlockReportAvgTime: 19.258064516129032,
> ...
> {quote}
> In the old version, they are consistent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Commented] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039791#comment-16039791
 ] 

Mingliang Liu commented on HADOOP-14500:


Can you verify that, [~rajesh.balamohan]? The original test report in 
[HADOOP-14478] skipped some of the tests, and these two may have been skipped. 
See 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/index.md#testing-the-hadoop-azure-module
 for how to enable more tests.

> Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
> -
>
> Key: HADOOP-14500
> URL: https://issues.apache.org/jira/browse/HADOOP-14500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>
> The following test fails:
> {code}
> TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario
>  Expected exception: 
> java.io.FileNotFoundExceptionTestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
>  Expected exception: java.io.FileNotFoundException
> {code}
> I did an early analysis and found that [HADOOP-14478] may be the reason. I 
> think we can fix the test itself here.






[jira] [Created] (HADOOP-14500) Azure: TestFileSystemOperationExceptionHandling{,MultiThreaded} fails

2017-06-06 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14500:
--

 Summary: Azure: 
TestFileSystemOperationExceptionHandling{,MultiThreaded} fails
 Key: HADOOP-14500
 URL: https://issues.apache.org/jira/browse/HADOOP-14500
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure, test
Reporter: Mingliang Liu


The following test fails:
{code}
TestFileSystemOperationExceptionHandling.testSingleThreadBlockBlobSeekScenario 
Expected exception: 
java.io.FileNotFoundExceptionTestFileSystemOperationsExceptionHandlingMultiThreaded.testMultiThreadBlockBlobSeekScenario
 Expected exception: java.io.FileNotFoundException
{code}

I did an early analysis and found that [HADOOP-14478] may be the reason. I 
think we can fix the test itself here.






[jira] [Updated] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table name configured

2017-06-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14433:
---
Fix Version/s: HADOOP-13345

> ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table 
> name configured
> --
>
> Key: HADOOP-14433
> URL: https://issues.apache.org/jira/browse/HADOOP-14433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14433-HADOOP-13345.001.patch
>
>
> test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 
> -Ddynamodblocal -Ds3guard}} failing
> {code}
> Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocalTests run: 1, 
> Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - in 
> org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirsTests run: 1, Failures: 0, 
> Errors: 1, Skipped: 0, Time elapsed: 10.264 sec <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOpstestConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 9.744 sec  <<< ERROR! java.lang.IllegalArgumentException: No 
> DynamoDB table name configured!
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81)
> {code}






[jira] [Updated] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table name configured

2017-06-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14433:
---
  Resolution: Fixed
Target Version/s: HADOOP-13345
  Status: Resolved  (was: Patch Available)

> ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table 
> name configured
> --
>
> Key: HADOOP-14433
> URL: https://issues.apache.org/jira/browse/HADOOP-14433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-14433-HADOOP-13345.001.patch
>
>
> test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 
> -Ddynamodblocal -Ds3guard}} failing
> {code}
> Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocalTests run: 1, 
> Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - in 
> org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirsTests run: 1, Failures: 0, 
> Errors: 1, Skipped: 0, Time elapsed: 10.264 sec <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOpstestConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 9.744 sec  <<< ERROR! java.lang.IllegalArgumentException: No 
> DynamoDB table name configured!
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81)
> {code}






[jira] [Updated] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14457:
---
Attachment: HADOOP-14457-HADOOP-13345.005.patch

Rebasing on the latest merge...

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch, HADOOP-14457-HADOOP-13345.005.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}






[jira] [Resolved] (HADOOP-13474) Add more details in the log when a token is expired

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HADOOP-13474.

Resolution: Won't Fix

With more understanding of this area, I think this jira is not necessary.
AuthenticationFilter usually passes authentication further down to the 
authentication handler, and that is where we should log more.
Will cover that in HADOOP-13174, so closing this one.

> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456],
>  to include more details (e.g. token type, username, tokenid) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.
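A minimal sketch of the kind of log line the issue asks for (hypothetical: the method and field names here are illustrative, not the actual AuthenticationFilter patch):

```java
// Hypothetical sketch: format an expiry warning that carries the token
// details the issue requests. Not Hadoop API calls; values are illustrative.
public class TokenExpiryLogSketch {
    static String expiredMessage(String user, String type, long expiredAtMs) {
        return String.format(
            "AuthenticationToken ignored: token expired (user=%s, type=%s, expiredAt=%d)",
            user, type, expiredAtMs);
    }

    public static void main(String[] args) {
        // A line like this is far easier to troubleshoot than the bare
        // "AuthenticationToken ignored: AuthenticationToken expired".
        System.out.println(expiredMessage("alice", "kerberos", 1470468800807L));
    }
}
```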






[jira] [Updated] (HADOOP-13474) Add more details in the log when a token is expired

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13474:
---
Status: Open  (was: Patch Available)

> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456],
>  to include more details (e.g. token type, username, tokenid) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.






[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039735#comment-16039735
 ] 

Hadoop QA commented on HADOOP-13854:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13854 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841365/HADOOP-13854.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0c3ef1f2304f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 867903d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12458/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12458/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12458/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch
>
>
> It appears if a KMS HTTP reques

[jira] [Updated] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-06 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14499:
-
Summary: Findbugs warning in LocalMetadataStore  (was: Findbufs warning in 
LocalMetadataStore)

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}






[jira] [Commented] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table name configured

2017-06-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039708#comment-16039708
 ] 

Sean Mackrory commented on HADOOP-14433:


Pushing shortly (the findbugs warning is unrelated and pre-existing; filed 
HADOOP-14499 for it). 

> ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table 
> name configured
> --
>
> Key: HADOOP-14433
> URL: https://issues.apache.org/jira/browse/HADOOP-14433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-14433-HADOOP-13345.001.patch
>
>
> test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 
> -Ddynamodblocal -Ds3guard}} failing
> {code}
> Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocalTests run: 1, 
> Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - in 
> org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirsTests run: 1, Failures: 0, 
> Errors: 1, Skipped: 0, Time elapsed: 10.264 sec <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOpstestConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 9.744 sec  <<< ERROR! java.lang.IllegalArgumentException: No 
> DynamoDB table name configured!
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81)
> {code}






[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039705#comment-16039705
 ] 

Mingliang Liu commented on HADOOP-14498:


Thanks [~aw] for the prompt debugging. Your analysis makes sense to me. I'll 
have a look at [HADOOP-13595].

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool modules have a single "-" in their names, and that 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Created] (HADOOP-14499) Findbufs warning in LocalMetadataStore

2017-06-06 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14499:
--

 Summary: Findbufs warning in LocalMetadataStore
 Key: HADOOP-14499
 URL: https://issues.apache.org/jira/browse/HADOOP-14499
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


First saw this raised by Yetus on HADOOP-14433:
{code}
Bug type UC_USELESS_OBJECT (click for details)
In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
Value ancestors
Type java.util.LinkedList
At LocalMetadataStore.java:[line 300]
{code}
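For context, UC_USELESS_OBJECT fires when an object is built up but its contents are never read. A minimal reproduction of the warning pattern (hypothetical, not the actual {{prune()}} code):

```java
import java.util.LinkedList;
import java.util.List;

public class UselessObjectSketch {
    // Hypothetical reproduction of UC_USELESS_OBJECT: 'ancestors' is
    // populated but never read, so it has no observable effect.
    static void prune(long modTime) {
        List<String> ancestors = new LinkedList<>();
        for (String path : new String[]{"/a", "/a/b", "/a/b/c"}) {
            ancestors.add(path); // written here...
        }
        // ...but never read again: FindBugs flags 'ancestors' as useless.
    }

    public static void main(String[] args) {
        prune(System.currentTimeMillis());
    }
}
```

The usual fix is either to delete the dead collection or to finish wiring it into the logic that was meant to consume it.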






[jira] [Comment Edited] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039692#comment-16039692
 ] 

Allen Wittenauer edited comment on HADOOP-14498 at 6/6/17 9:40 PM:
---

OK, using 'hadoop --debug classpath' we can see exactly what is happening with 
the specific construction of "hadoop-azure,hadoop-aws,hadoop-azure-datalake"

{code}
$ bin/hadoop --debug classpath 2>&1 | grep PROFILES
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-aws
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
{code}

hadoop-azure is getting rejected by the shell profiles code because it is 
getting caught up in the dedupe pattern-match code. Converting this to use the 
new array code added in HADOOP-13595 will probably fix this. Flipping the 
order will also make it pass.
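The failure mode described above can be sketched as follows (a hypothetical Java analogue of the shell dedupe check, not the actual hadoop-functions.sh code): an unanchored substring match treats hadoop-azure as already registered once hadoop-azure-datalake has been accepted.

```java
import java.util.List;

public class DedupeSketch {
    // Flawed dedupe analogous to an unanchored shell pattern match:
    // it matches substrings rather than whole entries.
    static boolean alreadyRegistered(List<String> registered, String name) {
        return String.join(" ", registered).contains(name);
    }

    public static void main(String[] args) {
        List<String> registered = List.of("hadoop-aws", "hadoop-azure-datalake");

        // "hadoop-azure" is a substring of "hadoop-azure-datalake" -> declined.
        System.out.println(alreadyRegistered(registered, "hadoop-azure")); // true

        // A whole-entry comparison accepts it, as expected.
        System.out.println(registered.contains("hadoop-azure")); // false
    }
}
```

This is also why flipping the order works: if hadoop-azure is registered before hadoop-azure-datalake, no earlier entry contains it as a substring.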



was (Author: aw):
OK, using 'hadoop --debug classpath' we can see exactly what is happening with 
the specific construction of "hadoop-azure,hadoop-aws,hadoop-azure-datalake"

{code}
$ bin/hadoop --debug classpath 2>&1 | grep PROFILES
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-aws
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
{code}

hadoop-azure is getting rejected by the shell profiles code because it is 
getting up in the dedupe pattern match code.  Converting this to use the new 
array code added will probably fix this.  Flipping the order will also make it 
pass.


> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool modules have a single "-" in their names, and that 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Comment Edited] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039692#comment-16039692
 ] 

Allen Wittenauer edited comment on HADOOP-14498 at 6/6/17 9:36 PM:
---

OK, using 'hadoop --debug classpath' we can see exactly what is happening with 
the specific construction of "hadoop-azure,hadoop-aws,hadoop-azure-datalake"

{code}
$ bin/hadoop --debug classpath 2>&1 | grep PROFILES
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-aws
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
{code}

hadoop-azure is getting rejected by the shell profiles code because it is 
getting caught up in the dedupe pattern-match code. Converting this to use the 
new array code will probably fix this. Flipping the order will also make it 
pass.



was (Author: aw):
OK, using 'hadoop --debug classpath' we can see exactly what is happening with 
the specific construction of "hadoop-azure,hadoop-aws,hadoop-azure-datalake"

{code}
$ bin/hadoop --debug classpath 2>&1 | grep PROFILES
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-aws
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
{code}

hadoop-azure is getting rejected by the shell profiles code because it is 
getting up in the dedupe pattern match code.  Converting this to use the new 
array code added will probably fix this.


> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool modules have a single "-" in their names, and that 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039692#comment-16039692
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

OK, using 'hadoop --debug classpath' we can see exactly what is happening with 
the specific construction of "hadoop-azure,hadoop-aws,hadoop-azure-datalake"

{code}
$ bin/hadoop --debug classpath 2>&1 | grep PROFILES
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-aws
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
{code}

hadoop-azure is getting rejected by the shell profiles code because it gets 
caught up in the dedupe pattern-match code.  Converting this to use the newly 
added array code will probably fix it.


> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool module names contain a single "-", so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Issue Comment Deleted] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14498:
--
Comment: was deleted

(was: HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar"
}
{code}

i.e., we're going to add azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar to the classpath.

hadoop-azure.sh, meanwhile, says:

{code}
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar" ]]; 
then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar"
  fi
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar"
{code}

i.e., azure-storage-4.2.0.jar, azure-keyvault-core-0.8.0.jar, and 
hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar.

The build is generating different dependency lists, so either the pom is 
incorrect/incomplete, the dependency file generator has a bug, or something 
else is going haywire.  It is not a bug in how HADOOP\_OPTIONAL\_TOOLS 
is parsed post-build.

You can actually verify this by using the 'hadoop classpath' command with 
different settings in hadoop-env.sh for HADOOP\_OPTIONAL\_TOOLS and 
with/without --debug:

With just hadoop-azure-datalake:
{code}
$ bin/hadoop --debug classpath
...
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-distcp.sh
...
DEBUG: Initial 
CLASSPATH=/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*
DEBUG: Profiles: hadoop-azure-datalake classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar
DEBUG: Profiles: hdfs classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*
...
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/mapreduce/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/*
{code}

You'll see both azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar present in the classpath.)

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpat

[jira] [Comment Edited] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039655#comment-16039655
 ] 

Allen Wittenauer edited comment on HADOOP-14498 at 6/6/17 9:26 PM:
---

HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar"
}
{code}

i.e., we're going to add azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar to the classpath.

hadoop-azure.sh, meanwhile, says:

{code}
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar" ]]; 
then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar"
  fi
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar"
{code}

i.e., azure-storage-4.2.0.jar, azure-keyvault-core-0.8.0.jar, and 
hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar.

The build is generating different dependency lists, so either the pom is 
incorrect/incomplete, the dependency file generator has a bug, or something 
else is going haywire.  It is not a bug in how HADOOP\_OPTIONAL\_TOOLS 
is parsed post-build.

You can actually verify this by using the 'hadoop classpath' command with 
different settings in hadoop-env.sh for HADOOP\_OPTIONAL\_TOOLS and 
with/without --debug:

With just hadoop-azure-datalake:
{code}
...
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-distcp.sh
...
DEBUG: Initial 
CLASSPATH=/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*
DEBUG: Profiles: hadoop-azure-datalake classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar
DEBUG: Profiles: hdfs classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*
...
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/mapreduce/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/*
{code}

You'll see both azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar present in the classpath.


was (Author: aw):
HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lak

[jira] [Comment Edited] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039655#comment-16039655
 ] 

Allen Wittenauer edited comment on HADOOP-14498 at 6/6/17 9:26 PM:
---

HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar"
}
{code}

i.e., we're going to add azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar to the classpath.

hadoop-azure.sh, meanwhile, says:

{code}
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar" ]]; 
then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar"
  fi
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar"
{code}

i.e., azure-storage-4.2.0.jar, azure-keyvault-core-0.8.0.jar, and 
hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar.

The build is generating different dependency lists, so either the pom is 
incorrect/incomplete, the dependency file generator has a bug, or something 
else is going haywire.  It is not a bug in how HADOOP\_OPTIONAL\_TOOLS 
is parsed post-build.

You can actually verify this by using the 'hadoop classpath' command with 
different settings in hadoop-env.sh for HADOOP\_OPTIONAL\_TOOLS and 
with/without --debug:

With just hadoop-azure-datalake:
{code}
$ bin/hadoop --debug classpath
...
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/bin/../libexec/shellprofile.d/hadoop-distcp.sh
...
DEBUG: Initial 
CLASSPATH=/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*
DEBUG: Profiles: hadoop-azure-datalake classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar
DEBUG: Profiles: hdfs classpath
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs
DEBUG: Append CLASSPATH: 
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*
...
/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/etc/hadoop:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/azure-data-lake-store-sdk-2.1.4.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/tools/lib/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/mapreduce/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/lib/*:/Users/aw/H/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/yarn/*
{code}

You'll see both azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar present in the classpath.


was (Author: aw):
HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOO

[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039655#comment-16039655
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

HADOOP\_OPTIONAL\_TOOLS basically triggers a read of 
"libexec/shellprofile.d/(whatever).sh", which is created at build time by some 
maven magic and "dev-support/bin/dist-tools-hooks-maker".

The inside of this file (after cutting out the boilerplate) says, effectively:

{code}
function _hadoop-azure-datalake_hadoop_classpath
{
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-data-lake-store-sdk-2.1.4.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar"
}
{code}

i.e., we're going to add azure-data-lake-store-sdk-2.1.4.jar and 
hadoop-azure-datalake-3.0.0-alpha4-SNAPSHOT.jar to the classpath.

hadoop-azure.sh, meanwhile, says:

{code}
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar" ]]; 
then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-storage-4.2.0.jar"
  fi
  if [[ -f 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
 ]]; then
hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/azure-keyvault-core-0.8.0.jar"
  fi
  hadoop_add_classpath 
"${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_LIB_JARS_DIR}/hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar"
{code}

i.e., azure-storage-4.2.0.jar, azure-keyvault-core-0.8.0.jar, and 
hadoop-azure-3.0.0-alpha4-SNAPSHOT.jar.

The build is generating different dependency lists, so either the pom is 
incorrect/incomplete, the dependency file generator has a bug, or something 
else is going haywire.  It is not a bug in how HADOOP\_OPTIONAL\_TOOLS 
is parsed post-build.

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool module names contain a single "-", so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039637#comment-16039637
 ] 

Steve Loughran commented on HADOOP-14475:
-

bq. 3. that is the issue that confused me. I still don't know why the 
filesystem (S3AFileSystem) is initialized multiple times in an MR job. For 
AzureFileSystem and DataNodeMetric, the filesystem and MetricSystem should 
only be initialized once.

Every connection to a different bucket will have its own FS instance, with its 
own settings; if your mapper or reducer is working with >1 bucket, you use >1 
FS. This is more obvious in things like Hive and Spark, where processes handle 
many requests from different people, and FS instances are actually stored 
separately for each person as well as each bucket (have a look at 
FileSystem.get()). You'd get the same with Azure trying to talk to different 
buckets in the same process too.

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
> Attachments: s3a-metrics.patch1
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should use s3.






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-06-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039636#comment-16039636
 ] 

Aaron Fabbri commented on HADOOP-13345:
---

Not hanging for me, but took about 8 1/2 minutes to complete.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039628#comment-16039628
 ] 

Xiao Chen commented on HADOOP-13854:


This patch is still valid.
Below is a sample message printed by it into kms.log (I added a hard-coded 
'throw' in KMS.java; in real scenarios I expect us to see the real exceptions):
{noformat}
2017-06-06 13:57:38,303 WARN  KMS - User xiao (auth:SIMPLE) request GET 
http://localhost:9600/kms/v1/key/k1/_eek?eek_op=generate&num_keys=2&user.name=xiao
 caused exception.
java.lang.Exception: test 
at 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(KMS.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
at 
org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
at 
org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:142)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1551)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandle

[jira] [Updated] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13854:
---
Status: Patch Available  (was: Open)

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch
>
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by 
> tomcat and a response is sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
>  overriding Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit log and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-06-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039611#comment-16039611
 ] 

Aaron Fabbri commented on HADOOP-13345:
---

I'll test it in a moment.  How long did you wait?  I thought someone increased 
the visibility delay for the inconsistent s3 client, and IIRC the test waits 2x 
that long in some cases.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039602#comment-16039602
 ] 

Steve Loughran commented on HADOOP-13345:
-

I'm getting ITestS3GuardListConsistency hanging when I run with dynamo or 
localdynamo. Anyone else?

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Updated] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14498:
---
Priority: Critical  (was: Major)

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Priority: Critical
>
> # With this setting, hadoop-azure does not show up in the Hadoop classpath, 
> even though both hadoop-aws and hadoop-azure-datalake 
> do.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that Hadoop tool modules have a single "-" in their names, so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Created] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-06-06 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14498:
--

 Summary: HADOOP_OPTIONAL_TOOLS not parsed correctly
 Key: HADOOP-14498
 URL: https://issues.apache.org/jira/browse/HADOOP-14498
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha1
Reporter: Mingliang Liu



# With this setting, hadoop-azure does not show up in the Hadoop classpath, 
even though both hadoop-aws and hadoop-azure-datalake 
do.{code:title=hadoop-env.sh}
export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
{code}
# And if we put only hadoop-azure and hadoop-aws, both of them are shown in the 
classpath.
{code:title=hadoop-env.sh}
export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
{code}

This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we assume 
that Hadoop tool modules have a single "-" in their names, so 
_hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
assumptions about {{${project.artifactId\}}}?

Ping [~aw].
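The single-dash hypothesis can be reproduced in miniature. The sketch below is not the actual shell logic from hadoop-functions.sh; it is a hypothetical Java model showing how deriving a key from only the first two dash-separated tokens would make hadoop-azure and hadoop-azure-datalake collide, so one of the two modules silently vanishes:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical model of the suspected parsing flaw: if only the first two
// dash-separated tokens are used as a key, module names with two dashes
// collide with their shorter prefixes.
public class OptionalToolsParsing {
  static Set<String> naiveKeys(String optionalTools) {
    Set<String> keys = new LinkedHashSet<>();
    for (String tool : optionalTools.split(",")) {
      String[] parts = tool.split("-");
      // Faulty assumption: every tool module has exactly one '-'.
      keys.add(parts[0] + "-" + parts[1]);
    }
    return keys;
  }

  public static void main(String[] args) {
    // "hadoop-azure-datalake" collapses onto "hadoop-azure": only 2 keys remain.
    System.out.println(naiveKeys("hadoop-azure,hadoop-aws,hadoop-azure-datalake"));
  }
}
```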






[jira] [Commented] (HADOOP-13474) Add more details in the log when a token is expired

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039537#comment-16039537
 ] 

Hadoop QA commented on HADOOP-13474:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13474 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13474 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822729/HADOOP-13474.01.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12456/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456],
>  to include more details (e.g. token type, username, token id) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.
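A minimal sketch of the kind of message the issue asks for; the field names and helper are illustrative inventions, not the actual AuthenticationFilter code:

```java
// Illustrative only: building an expired-token warning that carries the
// fields needed for troubleshooting (token type, user, expiry time) instead
// of the bare "AuthenticationToken expired".
public class TokenExpiryLog {
  static String expiredTokenMessage(String tokenType, String user, long expiredAtMillis) {
    return String.format(
        "AuthenticationToken ignored: expired (type=%s, user=%s, expiredAt=%d)",
        tokenType, user, expiredAtMillis);
  }

  public static void main(String[] args) {
    System.out.println(expiredTokenMessage("kerberos", "alice", 1470467600807L));
  }
}
```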






[jira] [Commented] (HADOOP-13174) Add more debug logs for delegation tokens and authentication

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039536#comment-16039536
 ] 

Hadoop QA commented on HADOOP-13174:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13174 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805340/HADOOP-13174.04.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12457/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more debug logs for delegation tokens and authentication
> 
>
> Key: HADOOP-13174
> URL: https://issues.apache.org/jira/browse/HADOOP-13174
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13174.01.patch, HADOOP-13174.02.patch, 
> HADOOP-13174.03.patch, HADOOP-13174.04.patch
>
>
> Recently I debugged several authentication related problems, and found that 
> the debug logs are not enough to identify a problem.
> This jira improves it by adding more debug/trace logs along the line.






[jira] [Updated] (HADOOP-13174) Add more debug logs for delegation tokens and authentication

2017-06-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13174:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14497

> Add more debug logs for delegation tokens and authentication
> 
>
> Key: HADOOP-13174
> URL: https://issues.apache.org/jira/browse/HADOOP-13174
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13174.01.patch, HADOOP-13174.02.patch, 
> HADOOP-13174.03.patch, HADOOP-13174.04.patch
>
>
> Recently I debugged several authentication related problems, and found that 
> the debug logs are not enough to identify a problem.
> This jira improves it by adding more debug/trace logs along the line.






[jira] [Updated] (HADOOP-13474) Add more details in the log when a token is expired

2017-06-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13474:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14497

> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456],
>  to include more details (e.g. token type, username, token id) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.






[jira] [Updated] (HADOOP-13854) KMS should log error details even if a request is malformed

2017-06-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13854:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14497

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch
>
>
> It appears that if a KMS HTTP request is malformed, it can be rejected by 
> Tomcat and a response sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99],
>  which overrides Jersey's ExceptionMapper.
> This behavior is okay, but the logs then contain an ERROR audit entry and 
> nothing in kms.log (or anywhere else). That makes troubleshooting pretty 
> painful; let's improve it.






[jira] [Resolved] (HADOOP-14496) Logs for KMS delegation token lifecycle

2017-06-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-14496.

Resolution: Duplicate

> Logs for KMS delegation token lifecycle
> ---
>
> Key: HADOOP-14496
> URL: https://issues.apache.org/jira/browse/HADOOP-14496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>







[jira] [Moved] (HADOOP-14497) Logs for KMS delegation token lifecycle

2017-06-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang moved HDFS-11938 to HADOOP-14497:
---

Key: HADOOP-14497  (was: HDFS-11938)
Project: Hadoop Common  (was: Hadoop HDFS)

> Logs for KMS delegation token lifecycle
> ---
>
> Key: HADOOP-14497
> URL: https://issues.apache.org/jira/browse/HADOOP-14497
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>
> We have run into quite a few customer cases about authentication failures 
> related to KMS delegation tokens. It would be nice to see a log for each 
> stage of the token's lifecycle:
> 1. creation
> 2. renewal
> 3. removal upon cancellation
> 4. removal upon expiration
> Then, when we correlate the logs for the same DT, we can get a good picture 
> of what's going on and what could have caused the authentication failure.
> The same applies to other delegation tokens.
> NOTE: When logging info about a delegation token, we don't want to leak the 
> user's secret info.
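One way the four lifecycle stages could be logged uniformly, keeping the NOTE in mind (only identifying fields, never the token's secret bytes). This is a sketch with invented names, not the actual KMS delegation-token code:

```java
// Sketch only: a uniform lifecycle log line keyed by non-secret token fields
// (sequence number, owner, expiry), so all four events for the same token can
// be correlated later. The token's secret is deliberately never logged.
public class DtLifecycleLog {
  enum Event { CREATED, RENEWED, CANCELLED, EXPIRED }

  static String format(Event event, int sequenceNumber, String owner, long expiryMillis) {
    return String.format("Token lifecycle: event=%s, seq=%d, owner=%s, expiry=%d",
        event, sequenceNumber, owner, expiryMillis);
  }

  public static void main(String[] args) {
    System.out.println(format(Event.CREATED, 7, "alice", 1496700000000L));
    System.out.println(format(Event.EXPIRED, 7, "alice", 1496700000000L));
  }
}
```

Grepping the log for `seq=7` would then show the whole history of that one token.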






[jira] [Created] (HADOOP-14496) Logs for KMS delegation token lifecycle

2017-06-06 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-14496:
--

 Summary: Logs for KMS delegation token lifecycle
 Key: HADOOP-14496
 URL: https://issues.apache.org/jira/browse/HADOOP-14496
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yongjun Zhang









[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039478#comment-16039478
 ] 

Hadoop QA commented on HADOOP-14457:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-14457 does not apply to HADOOP-13345. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871609/HADOOP-14457-HADOOP-13345.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12455/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}




[jira] [Commented] (HADOOP-14491) Azure has messed doc structure

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039455#comment-16039455
 ] 

Hudson commented on HADOOP-14491:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11832 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11832/])
HADOOP-14491. Azure has messed doc structure. Contributed by Mingliang 
(liuml07: rev 536f057158c445a57049f6c392869ae2f0be4f24)
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md


> Azure has messed doc structure
> --
>
> Key: HADOOP-14491
> URL: https://issues.apache.org/jira/browse/HADOOP-14491
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14491.000.patch, new.png, old.png
>
>
> # The _WASB Secure mode and configuration_ and _Authorization Support in 
> WASB_ sections are missing in the navigation
> # _Authorization Support in WASB_ should be header level 3 instead of level 2 
> # Some code blocks do not specify a format
> # Sample code indentation is not unified.
> Let's use the auto-generated navigation instead of manually updating it, 
> just as the other documents do.






[jira] [Updated] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14457:
---
Status: Patch Available  (was: Open)

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}






[jira] [Commented] (HADOOP-14472) Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039417#comment-16039417
 ] 

Hudson commented on HADOOP-14472:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11831 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11831/])
HADOOP-14472. Azure: TestReadAndSeekPageBlobAfterWrite fails (liuml07: rev 
6b5285bbcb439944ba6c4701571ffbb00258d5a1)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestReadAndSeekPageBlobAfterWrite.java


> Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently
> -
>
> Key: HADOOP-14472
> URL: https://issues.apache.org/jira/browse/HADOOP-14472
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14472.000.patch
>
>
> Reported by [HADOOP-14461]
> {code}
> testManySmallWritesWithHFlush(org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite)
>   Time elapsed: 1.051 sec  <<< FAILURE!
> java.lang.AssertionError: hflush duration of 13, less than minimum expected 
> of 20
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.writeAndReadOneFile(TestReadAndSeekPageBlobAfterWrite.java:286)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.testManySmallWritesWithHFlush(TestReadAndSeekPageBlobAfterWrite.java:247)
> {code}
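The assertion fails because it demands a *lower* bound on wall-clock time, which fast hardware violates. A hedged sketch of a safer measurement pattern (illustrative, not the actual test fix): measure with a monotonic clock and assert only an upper bound, i.e. a time budget.

```java
// Illustrative helper: measure an operation with a monotonic clock. Tests
// should assert only upper bounds (budgets) on the result; asserting a
// minimum duration, as in the failure above, breaks on fast machines.
public class TimingAssertion {
  interface Op { void run() throws Exception; }

  static long elapsedMillis(Op op) throws Exception {
    long start = System.nanoTime();
    op.run();
    return (System.nanoTime() - start) / 1_000_000L;
  }

  public static void main(String[] args) throws Exception {
    long ms = elapsedMillis(() -> Thread.sleep(5));
    System.out.println("elapsed ~" + ms + " ms; assert only that it stays under a budget");
  }
}
```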






[jira] [Updated] (HADOOP-14491) Azure has messed doc structure

2017-06-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14491:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{branch-2}} and {{trunk}} branches. Thanks for your prompt review 
[~ajisakaa].

> Azure has messed doc structure
> --
>
> Key: HADOOP-14491
> URL: https://issues.apache.org/jira/browse/HADOOP-14491
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14491.000.patch, new.png, old.png
>
>
> # The _WASB Secure mode and configuration_ and _Authorization Support in 
> WASB_ sections are missing in the navigation
> # _Authorization Support in WASB_ should be header level 3 instead of level 2 
> # Some code blocks do not specify a format
> # Sample code indentation is not unified.
> Let's use the auto-generated navigation instead of manually updating it, 
> just as the other documents do.






[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14035:

Fix Version/s: 2.8.2

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14035.branch-2.8.patch, 
> HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.






[jira] [Updated] (HADOOP-14472) Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently

2017-06-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14472:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{branch-2}} and {{trunk}} branches. Thanks for the review 
[~ste...@apache.org].

> Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently
> -
>
> Key: HADOOP-14472
> URL: https://issues.apache.org/jira/browse/HADOOP-14472
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14472.000.patch
>
>
> Reported by [HADOOP-14461]
> {code}
> testManySmallWritesWithHFlush(org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite)
>   Time elapsed: 1.051 sec  <<< FAILURE!
> java.lang.AssertionError: hflush duration of 13, less than minimum expected 
> of 20
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.writeAndReadOneFile(TestReadAndSeekPageBlobAfterWrite.java:286)
>   at 
> org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.testManySmallWritesWithHFlush(TestReadAndSeekPageBlobAfterWrite.java:247)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13048) Improvements to StatD metrics2 sink

2017-06-06 Thread Michael Moss (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039373#comment-16039373
 ] 

Michael Moss commented on HADOOP-13048:
---

Hi. Please see this comment regarding a peculiarity with the existing StatsD 
metrics2 sink:
https://issues.apache.org/jira/browse/HADOOP-12360?focusedCommentId=16036925&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16036925

It was suggested that we might revisit/take up this work on this ticket.

> Improvements to StatD metrics2 sink
> ---
>
> Key: HADOOP-13048
> URL: https://issues.apache.org/jira/browse/HADOOP-13048
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Xiao Chen
>Priority: Minor
>
> In a recent offline review of feature HADOOP-12360, [~jojochuang] had some 
> good comments. Overall it is a nice feature, but it could use some 
> improvements:
> - Validation should be more robust:
> {code}
> public void init(SubsetConfiguration conf) {
>   // Get StatsD host configurations.
>   final String serverHost = conf.getString(SERVER_HOST_KEY);
>   final int serverPort = Integer.parseInt(conf.getString(SERVER_PORT_KEY));
> {code}
> - Javadoc should be more accurate:
> ** Inconsistent use of host.name vs. hostname
> ** Could have better explanation regarding service name and process name
> - {{StatsDSink#writeMetric}} should be private.
> - Hopefully we can add a document about this and the other metric sinks.
> Thanks Wei-Chiu and [~dlmarion] for the contribution!
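The "validation should be more robust" point refers to the quoted init() parsing the port with a bare Integer.parseInt, which throws NumberFormatException on bad input and an NPE when the key is absent. A hedged sketch of a more defensive variant follows; parsePort and its fallback behavior are illustrative assumptions, not the actual StatsDSink API:

```java
/**
 * Illustrative sketch only, not StatsDSink: validate the configured port
 * instead of letting Integer.parseInt surface an unhelpful exception.
 */
class SinkConfigSketch {
  static int parsePort(String raw, int defaultPort) {
    if (raw == null || raw.trim().isEmpty()) {
      return defaultPort; // missing key: fall back instead of an NPE
    }
    try {
      int port = Integer.parseInt(raw.trim());
      if (port < 1 || port > 65535) {
        throw new IllegalArgumentException("port out of range: " + port);
      }
      return port;
    } catch (NumberFormatException e) {
      // wrap with a message naming the bad value, not just a stack trace
      throw new IllegalArgumentException("invalid port value: " + raw, e);
    }
  }
}
```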



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039363#comment-16039363
 ] 

Kihwal Lee commented on HADOOP-14035:
-

bq. only requires the change of clientBackOffEnabled from final to volatile and 
adding a setter for it.
+1. Looks good. I've verified that that and the conflict resolution in the test 
are the only changes from the branch-2 patch.

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14035.branch-2.8.patch, 
> HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039332#comment-16039332
 ] 

Sean Mackrory edited comment on HADOOP-14457 at 6/6/17 6:01 PM:


Also, I looked a bit deeper into the failure that is fixed by catching 
ArrayOutOfBoundsExceptions in ProvidedListStatusIterator.next() and throwing 
NoSuchElementException instead. ITestS3AContractGetFileStatus fails if 
-Ds3guard -Dauth is provided (which is what I started doing when I discovered 
this bug), and has done so since HADOOP-13926. That really just surfaced the 
existing incorrectness, though. The iterator should not be throwing 
ArrayOutOfBoundsException when it's out of elements.


was (Author: mackrorysd):
Also, I looked a bit deeper into the failure that is fixed by catching 
ArrayOutOfBoundsExceptions in ProvidedListStatusIterator.next() and throwing 
NoSuchElementException instead. ITestS3AContractGetFileStatus fails if 
-Ds3guard -Dauth is provided (which is what I started doing when I discovered 
this bug), and has done so since HADOOP-13926. That really just surfaced the 
existing incorrectness, though. The iterator should now be throwing 
ArrayOutOfBoundsException when it's out of elements.
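The iterator-contract fix described above can be sketched in plain Java; strings stand in for FileStatus entries, and the class name is a hypothetical stand-in for ProvidedListStatusIterator:

```java
import java.util.NoSuchElementException;

/**
 * Sketch of the described fix: next() on an exhausted iterator must throw
 * NoSuchElementException (the java.util.Iterator contract) rather than
 * leak an ArrayIndexOutOfBoundsException from the backing array.
 */
class ProvidedIteratorSketch {
  private final String[] entries;
  private int i = 0;

  ProvidedIteratorSketch(String... entries) {
    this.entries = entries;
  }

  boolean hasNext() {
    return i < entries.length;
  }

  String next() {
    if (!hasNext()) {
      // signal exhaustion explicitly instead of indexing past the array
      throw new NoSuchElementException("no more entries");
    }
    return entries[i++];
  }
}
```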

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue

[jira] [Commented] (HADOOP-13145) In DistCp, prevent unnecessary getFileStatus call when not preserving metadata.

2017-06-06 Thread Adam Kramer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039356#comment-16039356
 ] 

Adam Kramer commented on HADOOP-13145:
--

We're using Spark that is pre-built to 2.7 but I can try building Spark against 
2.8.1 when it's released to see how it goes.

> In DistCp, prevent unnecessary getFileStatus call when not preserving 
> metadata.
> ---
>
> Key: HADOOP-13145
> URL: https://issues.apache.org/jira/browse/HADOOP-13145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13145.001.patch, HADOOP-13145.003.patch, 
> HADOOP-13145-branch-2.004.patch, HADOOP-13145-branch-2.8.004.patch
>
>
> After DistCp copies a file, it calls {{getFileStatus}} to get the 
> {{FileStatus}} from the destination so that it can compare to the source and 
> update metadata if necessary.  If the DistCp command was run without the 
> option to preserve metadata attributes, then this additional 
> {{getFileStatus}} call is wasteful.
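A rough sketch of the optimization described, with illustrative names rather than DistCp's actual code: fetch the destination status only when some attribute must actually be preserved.

```java
import java.util.EnumSet;

/**
 * Hypothetical sketch, not DistCp: the destination getFileStatus RPC is
 * needed only for a source/destination metadata comparison, so it is
 * skipped entirely when no attribute is being preserved.
 */
class CopyCommitSketch {
  enum Attribute { REPLICATION, BLOCKSIZE, USER, GROUP, PERMISSION, TIMES }

  static int destStatusCalls = 0; // stands in for the RPC we want to avoid

  static String fetchDestStatus(String path) {
    destStatusCalls++;        // each call here is a wasted RPC when
    return "status:" + path;  // nothing is being preserved
  }

  /** Only compare metadata when some attribute must be preserved. */
  static void finalizeCopy(String dest, EnumSet<Attribute> preserved) {
    if (preserved.isEmpty()) {
      return; // skip the extra getFileStatus entirely
    }
    String status = fetchDestStatus(dest);
    // ...compare status against the source and update attributes...
  }
}
```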



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14035:
-
Attachment: HADOOP-14035.branch-2.8.patch

The entire refresh patch isn't needed. The 2.8 version of this patch only 
requires changing clientBackOffEnabled from final to volatile and adding a 
setter for it.

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14035.branch-2.8.patch, 
> HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14494) ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store undefined

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039338#comment-16039338
 ] 

Hadoop QA commented on HADOOP-14494:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14494 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871642/HADOOP-14494-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8d20a60c45b4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19ef3a8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12454/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12454/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store 
> undefined
> ---
>
> Key: HADOOP-14494
> URL: https://issues.apache.org/jira/browse/HADOOP-14494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14494-001.patch
>
>
> the move to JUnit 4 causes the {{ITestJets3tNativeS3FileSystemContract}} 
> tests to NPE in teardown if you don't actually declare an s3n test bucket.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsub

[jira] [Created] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-06 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14495:
--

 Summary: Add set options interface to FSDataOutputStreamBuilder 
 Key: HADOOP-14495
 URL: https://issues.apache.org/jira/browse/HADOOP-14495
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039332#comment-16039332
 ] 

Sean Mackrory commented on HADOOP-14457:


Also, I looked a bit deeper into the failure that is fixed by catching 
ArrayOutOfBoundsExceptions in ProvidedListStatusIterator.next() and throwing 
NoSuchElementException instead. ITestS3AContractGetFileStatus fails if 
-Ds3guard -Dauth is provided (which is what I started doing when I discovered 
this bug), and has done so since HADOOP-13926. That really just surfaced the 
existing incorrectness, though. The iterator should not be throwing 
ArrayOutOfBoundsException when it's out of elements.

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2017-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039333#comment-16039333
 ] 

Steve Loughran commented on HADOOP-14394:
-


I'd like the "create parent dir" to be optional, to stop people expecting it. 
This can be done in the base impl with a check for the dir existing & being a 
dir before calling the other create (yes, it's non-atomic; so is the standard 
implementation of check & create). 

As I've noted before, I'd prefer to use string values to set bool/numeric options. 

Maybe we would need to be able to declare whether an option was "optional" or a 
mandatory setting, e.g.

builder = createFile(path)
builder.mandatory("encryption", true)
builder.opt("erasure-coding", false)
builder.opt("fadvise", "random")
builder.mandatory("hsync", true) // better support real hsync, not pretend to

# lets client code be out of sync with what's shipping on their classpath.
# stops clients needing to know exactly which FS client they've got. All you 
need to know is that there's a builder you can set things on. Setting an 
unsupported/not-yet-supported option isn't a problem; it only becomes one if 
you make it mandatory.
# allows clients to code against a stream without needing the relevant JARs on 
the CP at compile time, or hard code for a specific FS.
# I believe it could help some of the filtering filesystems return different 
clients from different paths, without the client needing to try casting it to 
different instances to eventually get one which supports the option game.
# could also let filter filesystems inject their own options in between. I 
can't imagine they would, but they could.
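The opt/mandatory distinction sketched above could look roughly like this; the class and method names are hypothetical and not the FSDataOutputStreamBuilder API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of optional vs. mandatory builder options:
 * opt() records a hint the filesystem may ignore, while mandatory()
 * records a requirement that build() rejects if unsupported.
 */
class OptionBuilderSketch {
  private final Map<String, String> optional = new HashMap<>();
  private final Map<String, String> required = new HashMap<>();

  OptionBuilderSketch opt(String key, Object value) {
    optional.put(key, String.valueOf(value)); // string-valued, as suggested
    return this;
  }

  OptionBuilderSketch mandatory(String key, Object value) {
    required.put(key, String.valueOf(value));
    return this;
  }

  /** supported = the keys this (hypothetical) filesystem understands. */
  Map<String, String> build(Set<String> supported) {
    for (String key : required.keySet()) {
      if (!supported.contains(key)) {
        throw new IllegalArgumentException("unsupported mandatory option: " + key);
      }
    }
    Map<String, String> effective = new HashMap<>();
    for (Map.Entry<String, String> e : optional.entrySet()) {
      if (supported.contains(e.getKey())) { // unknown hints are ignored
        effective.put(e.getKey(), e.getValue());
      }
    }
    effective.putAll(required);
    return effective;
  }
}
```

Silently ignoring unknown optional hints is what lets client code compile and run against filesystems (or older JARs) that don't yet support a given feature, while mandatory options still fail fast.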



h3. {{TestDistributedFileSystem}}

MiniDFSCluster is AutoCloseable, so you can use it like:

{code}
try (MiniDFSCluster cluster =
    new MiniDFSCluster.Builder(conf).numDataNodes(1).build()) {
  cluster.waitActive();
  ...
}
{code}




> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ

2017-06-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039322#comment-16039322
 ] 

Daryn Sharp commented on HADOOP-14146:
--

[~drankye], have you had a chance to review?

> KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
> 
>
> Key: HADOOP-14146
> URL: https://issues.apache.org/jira/browse/HADOOP-14146
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14146.1.patch, HADOOP-14146.2.patch, 
> HADOOP-14146.patch
>
>
> Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add 
> multiple SPN host and/or realm support to spnego authentication.  The basic 
> problem is the server tries to guess and/or brute force what SPN the client 
> used.  The server should just decode the SPN from the AP-REQ.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2017-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039316#comment-16039316
 ] 

Steve Loughran commented on HADOOP-14394:
-

Hey, catching up on this; going through the JIRAs & will comment on them 
together. There's no JIRA on setOptions(), is there?

> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14035:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Not committed to 2.8 since it is 
dependent on HDFS-10207, which is not in 2.8.

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14494) ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store undefined

2017-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14494:

Status: Patch Available  (was: Open)

> ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store 
> undefined
> ---
>
> Key: HADOOP-14494
> URL: https://issues.apache.org/jira/browse/HADOOP-14494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14494-001.patch
>
>
> the move to JUnit 4 causes the {{ITestJets3tNativeS3FileSystemContract}} 
> tests to NPE in teardown if you don't actually declare an s3n test bucket.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14494) ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store undefined

2017-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14494:

Attachment: HADOOP-14494-001.patch

Testing: S3 Ireland, without an s3n test key (i.e., I'm only testing s3a code; 
this test is expected to skip everything, which it does successfully once this 
patch is in).

> ITestJets3tNativeS3FileSystemContract tests NPEs in teardown if store 
> undefined
> ---
>
> Key: HADOOP-14494
> URL: https://issues.apache.org/jira/browse/HADOOP-14494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14494-001.patch
>
>
> The move to JUnit 4 causes the {{ITestJets3tNativeS3FileSystemContract}} 
> tests to NPE in teardown if you don't actually declare an s3n test bucket.
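
The failure pattern is a teardown method dereferencing a field that setUp never assigned. A minimal null-guard sketch of that pattern — hypothetical names, not the actual HADOOP-14494 patch:

```java
// A minimal null-guard sketch (hypothetical names, not the actual patch):
// when setUp never assigned the store because no s3n test bucket was
// configured, teardown must tolerate the null field instead of NPE-ing.
public class TeardownGuard {
    private AutoCloseable store; // assigned only when a test bucket is defined

    public void setStore(AutoCloseable s) {
        store = s;
    }

    public void tearDown() throws Exception {
        if (store != null) { // the guard that prevents the reported NPE
            store.close();
            store = null;
        }
    }
}
```

The same guard works whether the test was skipped by JUnit's Assume machinery or never got past configuration checks.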






[jira] [Commented] (HADOOP-14488) s3guard localdynamo listStatus fails after renaming file into directory

2017-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039288#comment-16039288
 ] 

Steve Loughran commented on HADOOP-14488:
-

Fails with the same error (and still after rebasing the committer branch onto 
the latest merged s3guard branch).
{code}
ble hwdev-steve-new in region us-west-1: 
s3a://hwdev-steve-new/cloud-integration/DELAY_LISTING_ME/S3AConsistencySuite
- commit *** FAILED ***
  java.lang.IllegalArgumentException: childPath 
s3a://hwdev-steve-new/cloud-integration/DELAY_LISTING_ME/S3AConsistencySuite/work/task00/part-00
 must be a child of 
s3a://hwdev-steve-new/cloud-integration/DELAY_LISTING_ME/S3AConsistencySuite/work
  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:383)
  at 
org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata.childStatusToPathKey(DirListingMetadata.java:299)
  at 
org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata.put(DirListingMetadata.java:221)
  at org.apache.hadoop.fs.s3a.s3guard.S3Guard.dirListingUnion(S3Guard.java:223)
  at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1688)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1640)
  at 
com.hortonworks.spark.cloud.s3.S3AConsistencySuite$$anonfun$7.apply$mcV$sp(S3AConsistencySuite.scala:83)
  at 
com.hortonworks.spark.cloud.CloudSuiteTrait$$anonfun$ctest$1.apply$mcV$sp(CloudSuiteTrait.scala:62)
  at 
com.hortonworks.spark.cloud.CloudSuiteTrait$$anonfun$ctest$1.apply(CloudSuiteTrait.scala:60)
  at 
com.hortonworks.spark.cloud.CloudSuiteTrait$$anonfun$ctest$1.apply(CloudSuiteTrait.scala:60)
  ...
{code}



> s3guard localdynamo listStatus fails after renaming file into directory
> 
>
> Key: HADOOP-14488
> URL: https://issues.apache.org/jira/browse/HADOOP-14488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Blocker
>
> Running a Scala integration test with the inconsistent s3 client & local DDB enabled
> {code}
> fs.rename("work/task-00/part-00", work)
> fs.listStatus(work)
> {code}
> The listStatus of {{work}} fails with a message about the childStatus not 
> being a child of the parent. 
> Hypothesis: rename isn't updating the child path entry.
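
The Preconditions failure in the stack trace above boils down to a direct-child check on paths. The following illustrative sketch (hypothetical names, not Hadoop's actual DirListingMetadata code) shows why an entry still keyed under work/task00/ fails the check for a listing of work/ — which is exactly what happens if rename copies an entry without rewriting its parent path:

```java
import java.net.URI;

// Illustrative sketch (not Hadoop's DirListingMetadata code) of the
// direct-child precondition that fires in the stack trace above.
public class ChildPathCheck {
    static boolean isDirectChildOf(URI child, URI parent) {
        String prefix = parent.getPath().endsWith("/")
                ? parent.getPath() : parent.getPath() + "/";
        String childPath = child.getPath();
        // direct child: shares the parent prefix and has no further '/'
        return childPath.startsWith(prefix)
                && childPath.indexOf('/', prefix.length()) < 0;
    }
}
```

Under this check, work/part-00 is a valid child of work, but work/task00/part-00 is not — matching the reported IllegalArgumentException.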






[jira] [Commented] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039277#comment-16039277
 ] 

Kihwal Lee commented on HADOOP-14035:
-

+1 for the branch-2 patch.

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as are subsequent good client connections.
> Disconnects are very disruptive, especially to multi-threaded clients with 
> multiple outstanding requests, or clients without a retry proxy (e.g. 
> datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs, which significantly degrades performance.
> Server metrics look good despite horrible client latency.
> The fcq should use selective ipc disconnects to avoid pushback 
> disconnecting good clients.
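
One way to picture "selective" pushback is the following hedged sketch, with hypothetical names (this is not Hadoop's ipc code): on queue overflow, only callers whose recent request volume marks them as heavy are told to back off, while every other connection is admitted untouched:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged illustration of selective backoff (hypothetical names, not
// Hadoop's ipc classes): instead of disconnecting whichever connection
// happens to hit a full queue, only heavy recent callers are pushed back.
public class SelectiveBackoffSketch {
    private final Map<String, Integer> recentCalls = new HashMap<>();
    private final int heavyThreshold;

    public SelectiveBackoffSketch(int heavyThreshold) {
        this.heavyThreshold = heavyThreshold;
    }

    /** Count a call against its caller's recent share. */
    public void record(String caller) {
        recentCalls.merge(caller, 1, Integer::sum);
    }

    /** On queue overflow: back off heavy callers, admit everyone else. */
    public boolean shouldBackoff(String caller, boolean queueFull) {
        return queueFull && recentCalls.getOrDefault(caller, 0) >= heavyThreshold;
    }
}
```

The point of the design is that a well-behaved client never sees a disconnect just because an abusive one filled the queue first.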






[jira] [Commented] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039225#comment-16039225
 ] 

Hadoop QA commented on HADOOP-14035:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
58s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 295 unchanged - 1 fixed = 297 total (was 296) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
48s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871620/HADOOP-14035.branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf1b1265381f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / e889c82 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12453/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| JDK v1.7.0_131  Test Results | 
http

[jira] [Commented] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table name configured

2017-06-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039163#comment-16039163
 ] 

Mingliang Liu commented on HADOOP-14433:


+1

> ITestS3GuardConcurrentOps.testConcurrentTableCreations fails without table 
> name configured
> --
>
> Key: HADOOP-14433
> URL: https://issues.apache.org/jira/browse/HADOOP-14433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-14433-HADOOP-13345.001.patch
>
>
> test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 
> -Ddynamodblocal -Ds3guard}} failing
> {code}
> Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - 
> in org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirs
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.264 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
> testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 9.744 sec  <<< ERROR! java.lang.IllegalArgumentException: No 
> DynamoDB table name configured!
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81)
> {code}
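
A hypothetical sketch (not the attached patch) of the guard involved: validate the table-name setting up front with a clear message — or use it as the condition for skipping the test — rather than letting initialization throw mid-test:

```java
// Hypothetical sketch of the guard involved (not the attached patch):
// fail fast with a clear message when the DynamoDB table name setting
// is absent, instead of letting initialization blow up mid-test.
public class TableNameGuard {
    static String requireTableName(String configured) {
        if (configured == null || configured.isEmpty()) {
            throw new IllegalArgumentException(
                    "No DynamoDB table name configured!");
        }
        return configured;
    }
}
```

In a test, the same condition can feed a JUnit assumption so the case is skipped rather than errored when the configuration is absent.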






[jira] [Commented] (HADOOP-14485) Redundant 'final' modifier in try-with-resources statement

2017-06-06 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039124#comment-16039124
 ] 

wenxin he commented on HADOOP-14485:


Thanks [~brahmareddy] and [~templedf]. I appreciate it.

> Redundant 'final' modifier in try-with-resources statement
> --
>
> Key: HADOOP-14485
> URL: https://issues.apache.org/jira/browse/HADOOP-14485
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Assignee: wenxin he
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14485.001.patch
>
>
> Redundant 'final' modifier in the try-with-resources statement. Any variable 
> declared in a try-with-resources statement is implicitly final.
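
For illustration — resources declared in a try-with-resources statement are implicitly final (JLS §14.20.3), so the explicit modifier is pure noise. A minimal sketch of the before/after:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Resources in try-with-resources are implicitly final (JLS §14.20.3),
// so dropping the explicit modifier changes nothing semantically.
public class TryWithResources {
    static int readFirst(byte[] data) throws Exception {
        // before: try (final InputStream in = new ByteArrayInputStream(data))
        // after — identical semantics, no redundant modifier:
        try (InputStream in = new ByteArrayInputStream(data)) {
            return in.read(); // first byte, or -1 for empty input
        }
    }
}
```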






[jira] [Commented] (HADOOP-14485) Redundant 'final' modifier in try-with-resources statement

2017-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039103#comment-16039103
 ] 

Hudson commented on HADOOP-14485:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11830 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11830/])
HADOOP-14485. Redundant 'final' modifier in try-with-resources (brahma: rev 
19ef3a81f8b90579b4a7a95839d0c3ebdd56349c)
* (edit) 
hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestRollingAverages.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java


> Redundant 'final' modifier in try-with-resources statement
> --
>
> Key: HADOOP-14485
> URL: https://issues.apache.org/jira/browse/HADOOP-14485
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Assignee: wenxin he
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14485.001.patch
>
>
> Redundant 'final' modifier in the try-with-resources statement. Any variable 
> declared in a try-with-resources statement is implicitly final.






[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-06-06 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14035:
-
Attachment: HADOOP-14035.branch-2.patch

Apparently it's a Java 7 vs. 8 issue. Changed {{anyObject()}} to 
{{any(Schedulable.class)}}.

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14035.branch-2.patch, HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as are subsequent good client connections.
> Disconnects are very disruptive, especially to multi-threaded clients with 
> multiple outstanding requests, or clients without a retry proxy (e.g. 
> datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs, which significantly degrades performance.
> Server metrics look good despite horrible client latency.
> The fcq should use selective ipc disconnects to avoid pushback 
> disconnecting good clients.





