[jira] [Commented] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-05 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999317#comment-15999317
 ] 

Li Lu commented on HADOOP-14383:


Thanks for the work [~wheat9]! This is useful. The previous Jenkins run revealed 
several findbugs warnings, but they appear to be pre-existing ones in our codebase. 
+1 pending Jenkins. 

> Implement FileSystem that reads from HTTP / HTTPS endpoints
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-14383.000.patch, HADOOP-14383.001.patch
>
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal, as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented this proposal itself:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of that approach is that it requires the YARN cluster to put the 
> Samza jar on the classpath of each NM.
> It would be much nicer for Hadoop to have this feature built in.
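For reference, the core of such a read-only filesystem is small. Below is a hedged sketch that mirrors Samza's approach; it is not the attached patch, the class name is made up, and the inner stream here simply refuses to seek (a real implementation would have to buffer or re-open the connection, since {{FSDataInputStream}} requires a Seekable/PositionedReadable source):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

/** Illustrative read-only FileSystem over HTTP; name and details hypothetical. */
public class SketchHttpFileSystem extends FileSystem {
  private URI uri;

  @Override
  public void initialize(URI name, Configuration conf) throws IOException {
    super.initialize(name, conf);
    this.uri = name;
  }

  @Override
  public URI getUri() { return uri; }

  @Override
  public FSDataInputStream open(Path path, int bufferSize) throws IOException {
    URL url = path.toUri().toURL();
    return new FSDataInputStream(new HttpInputStream(url.openStream()));
  }

  /** FSInputStream gives us PositionedReadable for free; seek is unsupported here. */
  private static class HttpInputStream extends FSInputStream {
    private final InputStream in;
    private long pos;
    HttpInputStream(InputStream in) { this.in = in; }
    @Override public int read() throws IOException {
      int b = in.read();
      if (b >= 0) pos++;
      return b;
    }
    @Override public void seek(long p) throws IOException {
      throw new IOException("Seek not supported over plain HTTP streams");
    }
    @Override public long getPos() { return pos; }
    @Override public boolean seekToNewSource(long target) { return false; }
    @Override public void close() throws IOException { in.close(); }
  }

  // All mutating operations fail: the filesystem is read-only.
  @Override public FSDataOutputStream create(Path f, FsPermission p, boolean overwrite,
      int bufferSize, short replication, long blockSize, Progressable progress) {
    throw new UnsupportedOperationException("read-only");
  }
  @Override public FSDataOutputStream append(Path f, int bufferSize, Progressable progress) {
    throw new UnsupportedOperationException("read-only");
  }
  @Override public boolean rename(Path src, Path dst) { throw new UnsupportedOperationException(); }
  @Override public boolean delete(Path f, boolean recursive) { throw new UnsupportedOperationException(); }
  @Override public FileStatus[] listStatus(Path f) { throw new UnsupportedOperationException(); }
  @Override public void setWorkingDirectory(Path dir) { }
  @Override public Path getWorkingDirectory() { return new Path("/"); }
  @Override public boolean mkdirs(Path f, FsPermission permission) { throw new UnsupportedOperationException(); }
  @Override public FileStatus getFileStatus(Path f) {
    // Length is unknown without a HEAD request; -1 is a placeholder assumption.
    return new FileStatus(-1, false, 1, 4096, 0, f);
  }
}
```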



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14383:

Attachment: HADOOP-14383.001.patch







[jira] [Updated] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14387:
-
Target Version/s: 3.0.0-alpha3

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14387.1.patch
>
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c
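A minimal reproduction of the reported regression, assuming only hadoop-common from trunk on the classpath and no core-site.xml in test resources (the property queried is just an example):

```java
import org.apache.hadoop.conf.Configuration;

public class CoreSiteRepro {
  public static void main(String[] args) {
    // On trunk this throws while loading default resources, because
    // core-site.xml is absent from the classpath; earlier releases
    // silently skipped the missing resource.
    Configuration conf = new Configuration();
    System.out.println(conf.get("io.file.buffer.size"));
  }
}
```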






[jira] [Created] (HADOOP-14392) Memory leak issue in Zlib compressor

2017-05-05 Thread Jimmy Ouyang (JIRA)
Jimmy Ouyang created HADOOP-14392:
-

 Summary: Memory leak issue in Zlib compressor
 Key: HADOOP-14392
 URL: https://issues.apache.org/jira/browse/HADOOP-14392
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.6.0
Reporter: Jimmy Ouyang
Priority: Critical


While using Hadoop 2.6.0 and Hadoop 3.0, we noticed that in 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressorStream.java
 there is a memory leak caused by a missing call to compressor.end() in the 
close() method. compressor.end() calls the zlib native function deflateEnd(), 
which frees zlib's buffers.
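The same lifecycle exists in the JDK's own zlib binding, which makes the leak easy to demonstrate in isolation: java.util.zip.Deflater holds native zlib state that is only released by end(). This is an analogy to the missing compressor.end() call described above, not the Hadoop code itself:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterEndDemo {
    // Compresses input and, crucially, releases the native zlib state with
    // end() -- the step the report says CompressorStream.close() skips.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[64];
            while (!deflater.finished()) {
                int n = deflater.deflate(buf);
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        } finally {
            deflater.end(); // calls deflateEnd(), freeing zlib's native buffers
        }
    }

    public static void main(String[] args) {
        byte[] out = compress("hello hello hello".getBytes());
        System.out.println(out.length);
    }
}
```

Without the end() call, each Deflater leaks native (off-heap) memory until finalization, which under heavy churn can kill the process long before the Java heap fills.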






[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999238#comment-15999238
 ] 

Hadoop QA commented on HADOOP-14180:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-tools/hadoop-azure in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} root: The patch generated 0 new + 85 unchanged - 14 
fixed = 85 total (was 99) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 37s{color} 
| {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
43s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.datano

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999192#comment-15999192
 ] 

Sean Busbey commented on HADOOP-14284:
--

{quote}
I thought that this might be a problem because the thirdparty jar is not on the 
classpath at test time. Please let me know if I'm missing something.
{quote}

This is because the shading relocation doesn't happen until the package phase, 
which is after the test phase. Maven goes through a given phase for all modules 
in a reactor before doing the next one. If you look at the integration tests 
for the shaded client, you can see that the compilation of test classes there 
is moved to a new cycle after packaging has already shaded everything.

For this, you'd either have to move the shading of the third-party jar to 
before the normal compile phase, or we'd need it in a different repo with its 
own build cycle.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999176#comment-15999176
 ] 

Junping Du commented on HADOOP-14284:
-

Instead of shading Guava everywhere, should we consider the option of removing 
Guava from our code base completely? I like the convenience we gain at the dev 
stage, but I really hate the trouble it causes in maintenance and releases, 
given its poor compatibility across versions. Shall we just remove it, as other 
projects (e.g. ElasticSearch) have done before? 







[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-05 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999158#comment-15999158
 ] 

Xiaobing Zhou commented on HADOOP-14180:


I posted v0 patch for reviews, thx.

> FileSystem contract tests to replace JUnit 3 with 4
> ---
>
> Key: HADOOP-14180
> URL: https://issues.apache.org/jira/browse/HADOOP-14180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>  Labels: test
> Attachments: HADOOP-14180.000.patch
>
>
> This is from discussion in [HADOOP-14170], as Steve commented:
> {quote}
> ...it's time to move this to JUnit 4, annotate all tests with @test, and make 
> the test cases skip if they don't have the test FS defined. JUnit 3 doesn't 
> support Assume, so when I do test runs without the s3n or s3 fs specced, I 
> get lots of errors I just ignore.
> ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a 
> method to make the test a no-op, and insert an Assume.assumeTrue(false) in 
> there so they skip properly.
> {quote}
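The pattern Steve describes — JUnit 4 annotations plus Assume, so that tests with no configured filesystem skip rather than error — looks roughly like the following sketch; the class, test, and property names are made up for illustration:

```java
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class TestContractExample {

  @Before
  public void checkTestFsDefined() {
    // With JUnit 4, an unmet assumption marks the test as skipped rather
    // than failed -- something JUnit 3 cannot express, which is why
    // unconfigured s3/s3n runs currently produce errors to ignore.
    Assume.assumeTrue(System.getProperty("test.fs.contract.uri") != null);
  }

  @Test
  public void testRenameFile() throws Exception {
    // ... actual contract test body ...
  }
}
```

For subclasses that previously neutered a test by overriding it with a no-op body, the equivalent is an override whose first line is an unconditional `Assume.assumeTrue(false);`, so the runner reports it as skipped.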






[jira] [Updated] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-14180:
---
Attachment: HADOOP-14180.000.patch







[jira] [Updated] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-05 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-14180:
---
Status: Patch Available  (was: Open)







[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere

2017-05-05 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999058#comment-15999058
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-14284 at 5/5/17 10:52 PM:
--

-After making shadeTestJar true in  hadoop-shaded-thirdparty jar, it works! 
Attaching patch soon.-

This was wrong; it still produced the same error message, unfortunately.


was (Author: ozawa):
Oh, after making shadeTestJar true in  hadoop-shaded-thirdparty jar, it works! 
Attaching patch soon.







[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-05 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999058#comment-15999058
 ] 

Tsuyoshi Ozawa commented on HADOOP-14284:
-

Oh, after making shadeTestJar true in  hadoop-shaded-thirdparty jar, it works! 
Attaching patch soon.







[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-05 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999022#comment-15999022
 ] 

Tsuyoshi Ozawa commented on HADOOP-14284:
-

[~stack], thank you for your help.

{quote}
Tell us more please. Shading bundles the relocated .class files of guava and 
curator; they are included in the thirdparty jar... and the thirdparty jar is 
on the classpath, no?
{quote}

Yes, they are. This is a list of the contents of hadoop-shaded-thirdparty: 
https://gist.github.com/oza/62fdea66a55c86eda02d2a8530058153
The list shows that shading works well. As a result, mvn install -DskipTests 
succeeds with the patch, which means the thirdparty jar is on the classpath at 
compile time.

On the other hand, mvn test of hadoop-auth fails with the patch, although I 
added a dependency on hadoop-shaded-thirdparty in compile scope.

https://github.com/oza/hadoop/blob/HADOOP-14284/hadoop-common-project/hadoop-auth/pom.xml#L46

The result of mvn test is as follows:
{quote}
Running 
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator
Tests run: 20, Failures: 5, Errors: 11, Skipped: 0, Time elapsed: 23.407 sec 
<<< FAILURE! - in 
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator
testNotAuthenticatedWithMultiAuthHandler\[0\](org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
  Time elapsed: 1.649 sec  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/shaded/com/google/common/base/Splitter
{quote}

I thought that this might be a problem because the thirdparty jar is not on the 
classpath at test time. Please let me know if I'm missing something.







[jira] [Updated] (HADOOP-14376) Memory leak when reading a compressed file using the native library

2017-05-05 Thread Eli Acherkan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Acherkan updated HADOOP-14376:
--
Attachment: HADOOP-14376.001.patch

Patch attached. As a first-time contributor, I hope I followed the guidelines 
correctly.

For testing, I enhanced an existing unit test - TestCodec.codecTest(), since 
it's already invoked for different types of native and pure-Java codecs. I 
added an assertion that the number of leased decompressors after the test 
equals the number before it. This exposed a similar bug in 
BZip2Codec.BZip2CompressionInputStream.close(), which also doesn't call its 
super.close() method, and thus doesn't return the decompressor to the pool.

Adding an assertion for compressors as well as decompressors uncovered a 
similar issue in CompressorStream.close(), GzipCodec.GzipOutputStream.close(), 
and BZip2Codec.BZip2CompressionOutputStream.close(), which I attempted to fix 
as well.

Regarding BZip2Codec.BZip2CompressionOutputStream.close(), I removed the 
overriding method altogether, because the superclass's close() method invokes 
finish(). The finish() method handles internalReset() if needed, and also calls 
output.finish(), which eliminates the need to call output.flush() or 
output.close().

Testing GzipCodec without native libraries showed that CodecPool erroneously 
calls updateLeaseCounts even for compressors/decompressors that are null, or 
ones with the @DoNotPool annotation. I added a condition that checks for that.

The memory leak only manifests when using the native libraries. In Eclipse I 
achieved this by setting java.library.path in the unit test launcher. Seeing 
the usage of assumeTrue(isNative*Loaded()), I understand that native-related 
tests are covered in Maven builds as well.

Looking forward to a code review.
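The bug pattern being fixed here can be shown without Hadoop at all: a base stream whose close() returns a pooled resource, and a subclass that overrides close() but bypasses super.close(). This is a toy model only; the real classes involved are CompressionInputStream, DecompressorStream, and CodecPool:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolReturnDemo {
    static final Deque<Object> POOL = new ArrayDeque<>();
    static int leased = 0;

    static Object borrow() { leased++; return POOL.isEmpty() ? new Object() : POOL.pop(); }
    static void giveBack(Object o) { leased--; POOL.push(o); }

    static class BaseStream {
        final Object decompressor = borrow();
        public void close() { giveBack(decompressor); } // returns it to the pool
    }

    // Buggy subclass: overrides close() without calling super.close(), so the
    // decompressor is never returned -- the leak pattern described above.
    static class LeakyStream extends BaseStream {
        @Override public void close() { /* closes only the inner stream */ }
    }

    // Fixed subclass: delegates to super.close() so the pool gets it back.
    static class FixedStream extends BaseStream {
        @Override public void close() { super.close(); }
    }

    // The test idea from the comment: the lease count after close() must be
    // exactly one less than before it (the stream held one lease).
    static boolean leaksOnClose(BaseStream s) {
        int before = leased;
        s.close();
        return leased != before - 1;
    }

    public static void main(String[] args) {
        System.out.println(leaksOnClose(new LeakyStream()));  // leaks
        System.out.println(leaksOnClose(new FixedStream()));  // does not
    }
}
```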

> Memory leak when reading a compressed file using the native library
> ---
>
> Key: HADOOP-14376
> URL: https://issues.apache.org/jira/browse/HADOOP-14376
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, io
>Affects Versions: 2.7.0
>Reporter: Eli Acherkan
>Assignee: Eli Acherkan
> Attachments: Bzip2MemoryTester.java, HADOOP-14376.001.patch, 
> log4j.properties
>
>
> Opening and closing a large number of bzip2-compressed input streams causes 
> the process to be killed on OutOfMemory when using the native bzip2 library.
> Our initial analysis suggests that this can be caused by 
> {{DecompressorStream}} overriding the {{close()}} method, and therefore 
> skipping the line from its parent: 
> {{CodecPool.returnDecompressor(trackedDecompressor)}}. When the decompressor 
> object is a {{Bzip2Decompressor}}, its native {{end()}} method is never 
> called, and the allocated memory isn't freed.
> If this analysis is correct, the simplest way to fix this bug would be to 
> replace {{in.close()}} with {{super.close()}} in {{DecompressorStream}}.






[jira] [Updated] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14387:
-
Attachment: HADOOP-14387.1.patch

[~ste...@apache.org], I attached a patch that provides a better error message 
and also allows for URL includes, which is a known problem. Can you comment on 
whether this fixes the issue?







[jira] [Assigned] (HADOOP-14391) s3a: auto-detect region for bucket and use right endpoint

2017-05-05 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-14391:
-

Assignee: Aaron Fabbri

> s3a: auto-detect region for bucket and use right endpoint
> -
>
> Key: HADOOP-14391
> URL: https://issues.apache.org/jira/browse/HADOOP-14391
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Specifying the S3A endpoint ({{fs.s3a.endpoint}}) is
> - *required* for regions which only support v4 authentication
> - A good practice for all regions.
> The user experience of having to configure endpoints is not great.  Often it 
> is neglected and leads to additional cost, reduced performance, or failures 
> for v4 auth regions.
> I want to explore an option which, when enabled, auto-detects the region for 
> an s3 bucket and uses the proper endpoint.  Not sure if this is possible or 
> anyone has looked into it yet.
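One possible shape for such a probe, sketched against the AWS SDK for Java v1: getBucketLocation is a real SDK call, but the endpoint string construction and any wiring into S3AFileSystem are assumptions, and the classic region's legacy return value would need careful normalizing in real code:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketRegionProbe {
    // Asks S3 which region a bucket lives in and derives a regional endpoint.
    static String endpointFor(AmazonS3 s3, String bucket) {
        String loc = s3.getBucketLocation(bucket);
        // The classic region is reported with a legacy token ("US"); a real
        // implementation must normalize this and other legacy values.
        String region = "US".equals(loc) ? "us-east-1" : loc;
        return "s3." + region + ".amazonaws.com";
    }

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        System.out.println(endpointFor(s3, args[0]));
    }
}
```

An open question for the actual feature is whether the probe itself works against v4-only regions from a default endpoint, and how the extra round trip should be cached per bucket.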






[jira] [Comment Edited] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998855#comment-15998855
 ] 

Anu Engineer edited comment on HADOOP-14384 at 5/5/17 8:47 PM:
---

bq. Please do not refer to anyone's code as "badly executed", particularly that 
of a new contributor. 

My apologies; it was not intended to cause any hurt, and I am sorry if it came 
across that way to you. My reference to "good concept badly executed" was meant 
to convey that I would like this to stay in the code base, but that we should 
make whatever fixes are needed to make it ship-worthy. This JIRA was confusing 
since the only sub-task so far is to make it private, so I was trying to 
understand where we are going with it.

bq. Since these changes affect compatibility we want to get the details right, 
but the patch and the idea are still largely good.
Marking this API as private did not seem to be a step towards that goal, hence 
my query. My query was precisely to understand what the long-term fix is. The 
parent JIRA has a plan, but this specific JIRA was/is very light on details on 
how that is being achieved.
 
[~eddyxu] I would think that doing option 2 might make what we are doing in 
this JIRA superfluous. Would it not be easier to follow through with that plan, 
if that is what is being contemplated? 




was (Author: anu):
bq. Please do not refer to anyone's code as "badly executed", particularly that 
of a new contributor. 

My apologies, It was not intended to cause any hurt, I am sorry if it came 
across that way to you.  My reference to "good concept badly executed" was to 
make sure that I convey the understanding that I would like this to stay in 
code base, but we should make whatever fixes are needed to make it ship worthy. 
 This JIRA was confusing since only the sub-task so far is to make it private. 
So I was trying to understand where we are going with it.

bq.Since these changes affect compatibility we want to get the details right, 
but the patch and the idea are still largely good.
Marking this API as private did not seem to be a step towards that goal, hence 
my query. My query was precisely to understand what is the long term fix. The 
parent jira has a plan but this specific JIRA was/is very light on details on 
how that is being achieved.
 
[~eddyxu] I would think that doing option 2 might make what we are doing in 
this JIRA superfluous,   Would it not be easier to follow thru with that plan, 
if that is what is being contemplated ? 



> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13372) MR jobs can not access Swift filesystem if Kerberos is enabled

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1599#comment-1599
 ] 

Hadoop QA commented on HADOOP-13372:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HADOOP-13372 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1286/HADOOP-13372.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12254/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MR jobs can not access Swift filesystem if Kerberos is enabled
> --
>
> Key: HADOOP-13372
> URL: https://issues.apache.org/jira/browse/HADOOP-13372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/swift, security
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-13372.001.patch, HADOOP-13372.002.patch
>
>
> {code}
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:262)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:303)
> at 
> org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:524)
> at 
> org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> Caused by: java.net.UnknownHostException:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13372) MR jobs can not access Swift filesystem if Kerberos is enabled

2017-05-05 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-13372:

Attachment: HADOOP-13372.002.patch

Moved the test to the TestSwiftFileSystemBasicOps class and used JUnit's Assume 
instead of Assert, so the test is skipped rather than failed when its 
precondition does not hold.

> MR jobs can not access Swift filesystem if Kerberos is enabled
> --
>
> Key: HADOOP-13372
> URL: https://issues.apache.org/jira/browse/HADOOP-13372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/swift, security
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-13372.001.patch, HADOOP-13372.002.patch
>
>
> {code}
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:262)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:303)
> at 
> org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:524)
> at 
> org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> Caused by: java.net.UnknownHostException:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14390) Correct spelling of 'succeed' and variants

2017-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998886#comment-15998886
 ] 

Hudson commented on HADOOP-14390:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11692 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11692/])
HADOOP-14390. Correct spelling of 'succeed' and variants. Contributed by 
(cdouglas: rev e4f34ecb049a252fb1084c4c7f404d710b221969)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestAlignedPlanner.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBenchWithoutMR.java


> Correct spelling of 'succeed' and variants
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Assignee: Dongtao Zhang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14390.002.patch, HDFS-11750-v001.patch
>
>
> The wrong spelling "suceed" should be changed to "succeed".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14353) add an SSE-KMS scale test to see if you can overload the keystore in random IO

2017-05-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998880#comment-15998880
 ] 

Aaron Fabbri commented on HADOOP-14353:
---

FYI [~moist].

> add an SSE-KMS scale test to see if you can overload the keystore in random IO
> --
>
> Key: HADOOP-14353
> URL: https://issues.apache.org/jira/browse/HADOOP-14353
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Maybe add an optional IT test to aggressively seek on an SSE-KMS test file to 
> see if it can overload the KMS infra. The [default 
> limit|http://docs.aws.amazon.com/kms/latest/developerguide/limits.html] is 
> 600 requests/second. This may seem a lot, but with random IO, every new HTTPS 
> request in the chain is potentially triggering a new operation. 
> Someone should see what happens: how easy is it to create, and what is the 
> error message.
> This may not be something we can trigger in a simple IT test, just because 
> it's single host.
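
As a rough sanity check on the premise, a back-of-the-envelope calculation 
suggests a single moderately parallel random-IO job can exceed the limit; the 
workload figures below are illustrative assumptions, not measurements:

```shell
# Back-of-the-envelope check against the 600 requests/second KMS limit cited
# above. Both inputs are illustrative assumptions, not measured values.
readers=100            # concurrent random-IO input streams
reopens_per_sec=8      # HTTPS re-opens per stream triggered by seeks
total=$((readers * reopens_per_sec))
echo "${total} KMS decrypt requests/second against a 600 req/s default limit"
```

With these assumptions the aggregate rate is 800 req/s, already above the 
default limit even from one host.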



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13238) pid handling is failing on secure datanode

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998874#comment-15998874
 ] 

Hadoop QA commented on HADOOP-13238:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  2m 
26s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1282/HADOOP-13238.02.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 5c687d1183fd 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e4f34ec |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12253/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12253/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> pid handling is failing on secure datanode
> --
>
> Key: HADOOP-13238
> URL: https://issues.apache.org/jira/browse/HADOOP-13238
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, security
>Reporter: Allen Wittenauer
>Assignee: Andras Bokor
> Attachments: HADOOP-13238.01.patch, HADOOP-13238.02.patch
>
>
> {code}
> hdfs --daemon stop datanode
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: pid has changed for datanode, skip deleting pid file
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998875#comment-15998875
 ] 

Hadoop QA commented on HADOOP-14343:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  2m 
43s{color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 
74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14343 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1283/HADOOP-14343.02.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 62b471e10b04 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e4f34ec |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12252/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12252/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Wrong pid file name in error message when starting secure daemon
> 
>
> Key: HADOOP-14343
> URL: https://issues.apache.org/jira/browse/HADOOP-14343
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14343.01.patch, HADOOP-14343.02.patch
>
>
> {code}# this is for the daemon pid creation
>   #shellcheck disable=SC2086
>   echo $! > "${jsvcpidfile}" 2>/dev/null
>   if [[ $? -gt 0 ]]; then
> hadoop_error "ERROR:  Cannot write ${daemonname} pid ${daemonpidfile}."
>   fi{code}
> It will log datanode's pid file instead of JSVC's pid file.
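
A minimal sketch of the fix the description implies: the error message should 
name the jsvc pid file that the write actually targeted, not 
{{daemonpidfile}}. The {{hadoop_error}} stub and the pid path are illustrative, 
not the actual hadoop-functions.sh code:

```shell
# Sketch only: hadoop_error is stubbed and the pid path is a temp file for
# illustration; the real code lives in hadoop-functions.sh.
hadoop_error() { echo "$*" >&2; }
daemonname="datanode"
jsvcpidfile="${TMPDIR:-/tmp}/demo-jsvc-datanode.pid"

# shellcheck disable=SC2086
if ! echo $$ > "${jsvcpidfile}" 2>/dev/null; then
  # report the jsvc pid file we failed to write, not ${daemonpidfile}
  hadoop_error "ERROR:  Cannot write ${daemonname} jsvc pid ${jsvcpidfile}."
fi
```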



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998863#comment-15998863
 ] 

Jonathan Eagles commented on HADOOP-14387:
--

{code}
java.lang.RuntimeException: java.io.IOException: Fetch fail on include with no 
fallback while loading 'core-site.xml'
{code}
This message looks to be saying that while parsing core-site.xml, it was unable 
to xinclude a file and there was no fallback. Can you confirm this case, 
[~ste...@apache.org]?
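
For reference, a minimal core-site.xml sketch of the include-with-fallback 
shape being discussed; the included filename is hypothetical. Without the 
{{xi:fallback}} element, a missing include is a hard failure, which matches 
the error message above:

```xml
<!-- Illustrative only: site-overrides.xml is a hypothetical filename. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="site-overrides.xml">
    <xi:fallback>
      <!-- used when site-overrides.xml cannot be fetched -->
    </xi:fallback>
  </xi:include>
</configuration>
```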

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Assignee: Jonathan Eagles
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998855#comment-15998855
 ] 

Anu Engineer edited comment on HADOOP-14384 at 5/5/17 7:43 PM:
---

bq. Please do not refer to anyone's code as "badly executed", particularly that 
of a new contributor. 

My apologies; it was not intended to cause any hurt, and I am sorry if it came 
across that way to you. My reference to "good concept badly executed" was to 
convey that I would like this to stay in the code base, but that we should make 
whatever fixes are needed to make it ship-worthy. This JIRA was confusing since 
the only sub-task so far is to make it private, so I was trying to understand 
where we are going with it.

bq. Since these changes affect compatibility we want to get the details right, 
but the patch and the idea are still largely good.
Marking this API as private did not seem to be a step towards that goal, hence 
my query. My query was precisely to understand what the long-term fix is. The 
parent JIRA has a plan, but this specific JIRA was/is very light on details on 
how that is being achieved.

[~eddyxu] I would think that doing option 2 might make what we are doing in 
this JIRA superfluous. Would it not be easier to follow through with that plan, 
if that is what is being contemplated?




was (Author: anu):
bq. Please do not refer to anyone's code as "badly executed", particularly that 
of a new contributor. 

My apologies, It was not intended to cause any hurt, I am sorry if it came 
across that way to you.  My reference to "good concept badly executed" was to 
make sure that I convey the understanding that I would like this to stay in 
code base, but we should make whatever fixes are needed to make it ship worthy. 
 This JIRA was confusing since only the sub-task so far is to make it private. 
So I was trying to understand where we are going with it.

bq.Since these changes affect compatibility we want to get the details right, 
but the patch and the idea are still largely good.
Marking this API as private did not seem to be a step towards that goal, hence 
my query. My query was precisely to understand what is the long term fix. The 
parent jira and this specific JIRA was/is very light on details.
 
[~eddyxu] I would think that doing option 2 might make what we are doing in 
this JIRA superfluous,   Would it not be easier to follow thru with that plan, 
if that is what is being contemplated ? 



> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998859#comment-15998859
 ] 

Hadoop QA commented on HADOOP-14389:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common-project/hadoop-auth in trunk has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-common-project/hadoop-auth: The patch 
generated 0 new + 0 unchanged - 8 fixed = 0 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14389 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866648/HADOOP-14389.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9933f97495d4 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4e6bbd0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12250/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-auth-warnings.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12250/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12250/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12250/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Exception handling is incorrect in KerberosName.java
> -

[jira] [Comment Edited] (HADOOP-14391) s3a: auto-detect region for bucket and use right endpoint

2017-05-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998852#comment-15998852
 ] 

Aaron Fabbri edited comment on HADOOP-14391 at 5/5/17 7:39 PM:
---

There appears to be [an 
API|http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#getBucketLocation-com.amazonaws.services.s3.model.GetBucketLocationRequest-]
  to query bucket location.

I suppose we could:

- Introduce a new config flag, say, {{fs.s3a.auto.set.endpoint}} (or we could 
use a special value "auto" for {{fs.s3a.endpoint}}.)
- When enabled, at init time, use {{AmazonS3#getBucketLocation(..)}}, 
{{Region#fromValue()}}, then build the endpoint name 
{{s3-$\{REGION\}.amazonaws.com}} and set it.
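
A rough shell sketch of that flow, using the AWS CLI in place of the SDK call; 
the function name and the us-east-1/EU special cases are assumptions based on 
how GetBucketLocation reports regions, not S3A code:

```shell
# Sketch, not S3A code: map a GetBucketLocation result to an endpoint host.
endpoint_for_location() {
  case "$1" in
    ""|None|US) echo "s3.amazonaws.com" ;;            # us-east-1 reports no constraint
    EU)         echo "s3-eu-west-1.amazonaws.com" ;;  # legacy name for eu-west-1
    *)          echo "s3-$1.amazonaws.com" ;;
  esac
}

# location="$(aws s3api get-bucket-location --bucket "$bucket" \
#              --query LocationConstraint --output text)"
endpoint_for_location "eu-central-1"   # prints s3-eu-central-1.amazonaws.com
```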




was (Author: fabbri):
There appears to be [an 
API|http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#getBucketLocation-com.amazonaws.services.s3.model.GetBucketLocationRequest-]
  to query bucket location.

I suppose we could:

- Introduce a new config flag, say, {{fs.s3a.auto.set.endpoint}} (or we could 
use a special value "auto" for {{fs.s3a.endpoint}}.)
- When enabled, at init time, use {{AmazonS3#getBucketLocation(..)}}, 
{{Region#fromValue()}}, then build the endpoint name s3-${REGION}.amazonaws.com 
and set it.



> s3a: auto-detect region for bucket and use right endpoint
> -
>
> Key: HADOOP-14391
> URL: https://issues.apache.org/jira/browse/HADOOP-14391
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Aaron Fabbri
>
> Specifying the S3A endpoint ({{fs.s3a.endpoint}}) is
> - *required* for regions which only support v4 authentication
> - A good practice for all regions.
> The user experience of having to configure endpoints is not great.  Often it 
> is neglected and leads to additional cost, reduced performance, or failures 
> for v4 auth regions.
> I want to explore an option which, when enabled, auto-detects the region for 
> an s3 bucket and uses the proper endpoint.  Not sure if this is possible or 
> anyone has looked into it yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998855#comment-15998855
 ] 

Anu Engineer commented on HADOOP-14384:
---

bq. Please do not refer to anyone's code as "badly executed", particularly that 
of a new contributor. 

My apologies; it was not intended to cause any hurt, and I am sorry if it came 
across that way to you. My reference to "good concept badly executed" was meant 
to convey that I would like this to stay in the code base, but that we should 
make whatever fixes are needed to make it ship-worthy. This JIRA was confusing 
since the only sub-task so far is to make it private, so I was trying to 
understand where we are going with it.

bq.Since these changes affect compatibility we want to get the details right, 
but the patch and the idea are still largely good.
Marking this API as private did not seem to be a step towards that goal, hence 
my query, which was precisely to understand what the long-term fix is. The 
parent JIRA and this specific JIRA were/are very light on details.
 
[~eddyxu] I would think that doing option 2 might make what we are doing in 
this JIRA superfluous. Would it not be easier to follow through with that plan, 
if that is what is being contemplated?



> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14391) s3a: auto-detect region for bucket and use right endpoint

2017-05-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998852#comment-15998852
 ] 

Aaron Fabbri commented on HADOOP-14391:
---

There appears to be [an 
API|http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#getBucketLocation-com.amazonaws.services.s3.model.GetBucketLocationRequest-]
  to query bucket location.

I suppose we could:

- Introduce a new config flag, say, {{fs.s3a.auto.set.endpoint}} (or we could 
use a special value "auto" for {{fs.s3a.endpoint}}.)
- When enabled, at init time, use {{AmazonS3#getBucketLocation(..)}}, 
{{Region#fromValue()}}, then build the endpoint name s3-${REGION}.amazonaws.com 
and set it.
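The proposed flow can be sketched as follows. The SDK call 
{{AmazonS3#getBucketLocation(bucket)}} is stubbed out here, and the helper 
name is illustrative; in particular, mapping the classic region (where 
{{getBucketLocation()}} returns "US") to the default endpoint is an assumption 
about how the mapping would need to behave, not part of the proposal above.

```java
// Hedged sketch of the endpoint-construction step only. A real
// implementation would call AmazonS3#getBucketLocation(bucket) at
// init time to obtain `location`, then set fs.s3a.endpoint to the
// returned hostname.
class S3AEndpointResolver {

    // The classic us-east-1 region is reported as "US" (or an empty
    // location constraint), which should map to the default endpoint
    // rather than a literal "s3-US.amazonaws.com".
    static String endpointForLocation(String location) {
        if (location == null || location.isEmpty() || "US".equals(location)) {
            return "s3.amazonaws.com";
        }
        return "s3-" + location + ".amazonaws.com";
    }

    public static void main(String[] args) {
        System.out.println(endpointForLocation("US"));        // s3.amazonaws.com
        System.out.println(endpointForLocation("eu-west-1")); // s3-eu-west-1.amazonaws.com
    }
}
```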



> s3a: auto-detect region for bucket and use right endpoint
> -
>
> Key: HADOOP-14391
> URL: https://issues.apache.org/jira/browse/HADOOP-14391
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Aaron Fabbri
>
> Specifying the S3A endpoint ({{fs.s3a.endpoint}}) is
> - *required* for regions which only support v4 authentication
> - A good practice for all regions.
> The user experience of having to configure endpoints is not great.  Often it 
> is neglected and leads to additional cost, reduced performance, or failures 
> for v4 auth regions.
> I want to explore an option which, when enabled, auto-detects the region for 
> an s3 bucket and uses the proper endpoint.  Not sure if this is possible or 
> anyone has looked into it yet.






[jira] [Commented] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998849#comment-15998849
 ] 

Hadoop QA commented on HADOOP-14229:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14229 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1281/HADOOP-14229.03.patch 
|
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 80ec0aec2f3b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e4f34ec |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12251/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |





> hadoop.security.auth_to_local example is incorrect in the documentation
> ---
>
> Key: HADOOP-14229
> URL: https://issues.apache.org/jira/browse/HADOOP-14229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch, 
> HADOOP-14229.03.patch
>
>
> Let's see jhs as example:
> {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code}
> That means the principal has 2 components (jhs/myhost@REALM).
> The second column converts this to jhs@REALM, so the regex will not match 
> since it expects a '/' in the principal.
> My suggestion is
> {code}RULE:[2:$1](jhs)s/.*/mapred/{code}
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
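The mismatch described above can be reproduced with plain {{java.util.regex}}; 
this simulation only mimics the short-form substitution step of 
{{auth_to_local}}, not the full KerberosName logic:

```java
import java.util.regex.Pattern;

// Illustrative simulation: for RULE:[2:$1@$0], a two-component
// principal jhs/myhost@REALM.TLD is first reduced to the short form
// "$1@$0" before the rule's filter regex is applied.
class AuthToLocalDemo {

    // $1 = first name component, $0 = realm.
    static String shortForm(String component1, String realm) {
        return component1 + "@" + realm;
    }

    public static void main(String[] args) {
        String shortened = shortForm("jhs", "REALM.TLD"); // "jhs@REALM.TLD"

        // The documented filter expects a '/' that the shortened form
        // no longer contains, so it never matches:
        boolean oldRule = Pattern.matches("jhs/.*@.*REALM.TLD", shortened);

        // The suggested RULE:[2:$1](jhs) filters on "$1" alone:
        boolean newRule = Pattern.matches("jhs", "jhs");

        System.out.println(oldRule); // false
        System.out.println(newRule); // true
    }
}
```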






[jira] [Created] (HADOOP-14391) s3a: auto-detect region for bucket and use right endpoint

2017-05-05 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14391:
-

 Summary: s3a: auto-detect region for bucket and use right endpoint
 Key: HADOOP-14391
 URL: https://issues.apache.org/jira/browse/HADOOP-14391
 Project: Hadoop Common
  Issue Type: Improvement
  Components: s3
Affects Versions: 3.0.0-alpha2
Reporter: Aaron Fabbri


Specifying the S3A endpoint ({{fs.s3a.endpoint}}) is

- *required* for regions which only support v4 authentication
- A good practice for all regions.

The user experience of having to configure endpoints is not great.  Often it is 
neglected and leads to additional cost, reduced performance, or failures for v4 
auth regions.

I want to explore an option which, when enabled, auto-detects the region for an 
s3 bucket and uses the proper endpoint.  Not sure if this is possible or anyone 
has looked into it yet.






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998829#comment-15998829
 ] 

Andrew Wang commented on HADOOP-14384:
--

Hi Anu,

Please do not refer to anyone's code as "badly executed", particularly that of 
a new contributor. This isn't how we grow the community, and this API went 
through review by myself and Stack, among others.

Let's try and focus on the positives. Steve phrased his feedback as making a 
few changes, not a rework. And, as I think everyone agrees, the idea of a 
create builder is good. Since these changes affect compatibility we want to get 
the details right, but the patch and the idea are still largely good.

Thanks,
Andrew

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.






[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14343:
--
Attachment: (was: HADOOP-14343.02.patch)

> Wrong pid file name in error message when starting secure daemon
> 
>
> Key: HADOOP-14343
> URL: https://issues.apache.org/jira/browse/HADOOP-14343
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14343.01.patch, HADOOP-14343.02.patch
>
>
> {code}# this is for the daemon pid creation
>   #shellcheck disable=SC2086
>   echo $! > "${jsvcpidfile}" 2>/dev/null
>   if [[ $? -gt 0 ]]; then
> hadoop_error "ERROR:  Cannot write ${daemonname} pid ${daemonpidfile}."
>   fi{code}
> It will log datanode's pid file instead of JSVC's pid file.
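A minimal runnable sketch of the corrected branch; {{hadoop_error}} is stubbed 
and the pid-file path is illustrative. The point is simply that the message 
should reference {{jsvcpidfile}}, the file actually being written:

```shell
#!/usr/bin/env bash
# hadoop_error is stubbed for this demo; the real one lives in
# hadoop-functions.sh.
hadoop_error() { echo "$*" >&2; }

daemonname="datanode"
jsvcpidfile="/tmp/jsvc-demo.pid"   # illustrative path

sleep 1 &   # stand-in for the jsvc process just launched

# this is for the daemon pid creation; on failure, report the jsvc
# pid file actually being written, not the daemon pid file.
#shellcheck disable=SC2086
if ! echo $! > "${jsvcpidfile}" 2>/dev/null; then
  hadoop_error "ERROR:  Cannot write ${daemonname} jsvc pid ${jsvcpidfile}."
fi
wait
```

The `if ! command` form replaces the original `$?` check without changing 
behavior.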






[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14343:
--
Attachment: HADOOP-14343.02.patch

Reattach last patch to kick [~hadoopqa].

> Wrong pid file name in error message when starting secure daemon
> 
>
> Key: HADOOP-14343
> URL: https://issues.apache.org/jira/browse/HADOOP-14343
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14343.01.patch, HADOOP-14343.02.patch
>
>
> {code}# this is for the daemon pid creation
>   #shellcheck disable=SC2086
>   echo $! > "${jsvcpidfile}" 2>/dev/null
>   if [[ $? -gt 0 ]]; then
> hadoop_error "ERROR:  Cannot write ${daemonname} pid ${daemonpidfile}."
>   fi{code}
> It will log datanode's pid file instead of JSVC's pid file.






[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13238:
--
Attachment: HADOOP-13238.02.patch

Reattach last patch to kick [~hadoopqa].

> pid handling is failing on secure datanode
> --
>
> Key: HADOOP-13238
> URL: https://issues.apache.org/jira/browse/HADOOP-13238
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, security
>Reporter: Allen Wittenauer
>Assignee: Andras Bokor
> Attachments: HADOOP-13238.01.patch, HADOOP-13238.02.patch
>
>
> {code}
> hdfs --daemon stop datanode
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: pid has changed for datanode, skip deleting pid file
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
> {code}






[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13238:
--
Attachment: (was: HADOOP-13238.02.patch)

> pid handling is failing on secure datanode
> --
>
> Key: HADOOP-13238
> URL: https://issues.apache.org/jira/browse/HADOOP-13238
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, security
>Reporter: Allen Wittenauer
>Assignee: Andras Bokor
> Attachments: HADOOP-13238.01.patch, HADOOP-13238.02.patch
>
>
> {code}
> hdfs --daemon stop datanode
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: pid has changed for datanode, skip deleting pid file
> cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or 
> directory
> WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
> {code}






[jira] [Updated] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14229:
--
Attachment: HADOOP-14229.03.patch

Reattach last patch to kick [~hadoopqa]

> hadoop.security.auth_to_local example is incorrect in the documentation
> ---
>
> Key: HADOOP-14229
> URL: https://issues.apache.org/jira/browse/HADOOP-14229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch, 
> HADOOP-14229.03.patch
>
>
> Let's see jhs as example:
> {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code}
> That means the principal has 2 components (jhs/myhost@REALM).
> The second column converts this to jhs@REALM, so the regex will not match 
> since it expects a '/' in the principal.
> My suggestion is
> {code}RULE:[2:$1](jhs)s/.*/mapred/{code}
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html






[jira] [Updated] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14229:
--
Attachment: (was: HADOOP-14229.03.patch)

> hadoop.security.auth_to_local example is incorrect in the documentation
> ---
>
> Key: HADOOP-14229
> URL: https://issues.apache.org/jira/browse/HADOOP-14229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch
>
>
> Let's see jhs as example:
> {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code}
> That means the principal has 2 components (jhs/myhost@REALM).
> The second column converts this to jhs@REALM, so the regex will not match 
> since it expects a '/' in the principal.
> My suggestion is
> {code}RULE:[2:$1](jhs)s/.*/mapred/{code}
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html






[jira] [Updated] (HADOOP-14390) Wrong spelling "suceed" should be changed to "succeed"

2017-05-05 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14390:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

+1. I committed this. Thanks, Dongtao.

> Wrong spelling "suceed" should be changed to "succeed"
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Assignee: Dongtao Zhang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14390.002.patch, HDFS-11750-v001.patch
>
>
> Wrong spelling "suceed" should be changed to "succeed".






[jira] [Updated] (HADOOP-14390) Correct spelling of 'succeed' and variants

2017-05-05 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14390:
---
Summary: Correct spelling of 'succeed' and variants  (was: Wrong spelling 
"suceed" should be changed to "succeed")

> Correct spelling of 'succeed' and variants
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Assignee: Dongtao Zhang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14390.002.patch, HDFS-11750-v001.patch
>
>
> Wrong spelling "suceed" should be changed to "succeed".






[jira] [Updated] (HADOOP-14390) Wrong spelling "suceed" should be changed to "succeed"

2017-05-05 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14390:
---
Attachment: HADOOP-14390.002.patch

This correction is still incorrect:
{noformat}
-  // start new instance of secondary and verify that 
-  // a new rollEditLog suceedes inspite of the fact that 
+  // start new instance of secondary and verify that
+  // a new rollEditLog succeedes inspite of the fact that
{noformat}
Fixed in 002.

> Wrong spelling "suceed" should be changed to "succeed"
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Priority: Trivial
> Attachments: HADOOP-14390.002.patch, HDFS-11750-v001.patch
>
>
> Wrong spelling "suceed" should be changed to "succeed".






[jira] [Assigned] (HADOOP-14390) Wrong spelling "suceed" should be changed to "succeed"

2017-05-05 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reassigned HADOOP-14390:
--

Assignee: Dongtao Zhang

> Wrong spelling "suceed" should be changed to "succeed"
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Assignee: Dongtao Zhang
>Priority: Trivial
> Attachments: HADOOP-14390.002.patch, HDFS-11750-v001.patch
>
>
> Wrong spelling "suceed" should be changed to "succeed".






[jira] [Moved] (HADOOP-14390) Wrong spelling "suceed" should be changed to "succeed"

2017-05-05 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas moved HDFS-11750 to HADOOP-14390:
---

Affects Version/s: (was: 3.0.0-alpha2)
  Key: HADOOP-14390  (was: HDFS-11750)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Wrong spelling "suceed" should be changed to "succeed"
> --
>
> Key: HADOOP-14390
> URL: https://issues.apache.org/jira/browse/HADOOP-14390
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dongtao Zhang
>Priority: Trivial
> Attachments: HDFS-11750-v001.patch
>
>
> Wrong spelling "suceed" should be changed to "succeed".






[jira] [Comment Edited] (HADOOP-14356) Update CHANGES.txt to reflect all the changes in branch-2.7

2017-05-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998765#comment-15998765
 ] 

Konstantin Shvachko edited comment on HADOOP-14356 at 5/5/17 6:51 PM:
--

The patch looks good to me as well, modulo what [~ajisakaa] said.
Thanks for tracking this, [~brahmareddy].
Maybe we should keep it open in case other inconsistencies with CHANGES.txt 
emerge, and make it the last commit before the release?


was (Author: shv):
The patch looks good to me as well, modular what [~ajisakaa] said.
Thanks for tracking this [~brahmareddy].
May be we should we keep it open in case other inconsistencies with CHANGES.txt 
emerge. Make it the last commit before the release?

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-14356
> URL: https://issues.apache.org/jira/browse/HADOOP-14356
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-14356-002.patch, HADOOP-14356.patch
>
>
> Following jira's are not updated in {{CHANGES.txt}}
> HADOOP-14066,HDFS-11608,HADOOP-14293,HDFS-11628,YARN-6274,YARN-6152,HADOOP-13119,HDFS-10733,HADOOP-13958,HDFS-11280,YARN-6024






[jira] [Commented] (HADOOP-14356) Update CHANGES.txt to reflect all the changes in branch-2.7

2017-05-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998765#comment-15998765
 ] 

Konstantin Shvachko commented on HADOOP-14356:
--

The patch looks good to me as well, modulo what [~ajisakaa] said.
Thanks for tracking this, [~brahmareddy].
Maybe we should keep it open in case other inconsistencies with CHANGES.txt 
emerge, and make it the last commit before the release?

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-14356
> URL: https://issues.apache.org/jira/browse/HADOOP-14356
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-14356-002.patch, HADOOP-14356.patch
>
>
> Following jira's are not updated in {{CHANGES.txt}}
> HADOOP-14066,HDFS-11608,HADOOP-14293,HDFS-11628,YARN-6274,YARN-6152,HADOOP-13119,HDFS-10733,HADOOP-13958,HDFS-11280,YARN-6024






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998758#comment-15998758
 ] 

Lei (Eddy) Xu commented on HADOOP-14384:


Hi, [~anu] 

bq. Why make this private if we are not planning to ship this ?

Yes, the {{Builder}} interface will be shipped later, with some modifications. 
In its current form, we think it is not ready to be consumed by HDFS users in 
either trunk or branch-2.

bq. Given the comments on the original JIRA, I would presume that this is going 
to be reworked to an extent ?

Yes, the following sub-tasks of this umbrella JIRA (HADOOP-14365) will re-work 
on the Builder interface.

bq. Are you concerned that someone might use this accidentally ? What evil are 
we trying to prevent by making this private ?

As there are several releases planned in the short term, this Builder 
interface, which will eventually be a public interface like {{FileSystem}}, 
should not be consumable by projects outside of HDFS. Releasing the current 
form as part of the public {{FileSystem}} API is neither desired nor advocated.

Does the above answer your questions, [~anu]?

I don't have a strong preference between 1) marking the API as private today 
and 2) pulling it out of {{FileSystem}} and keeping it in 
{{DistributedFileSystem}} for now. [~Sammi], could you share your opinion? 
Thanks.
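For reference, limiting an API's audience in Hadoop is usually done with the 
{{InterfaceAudience}}/{{InterfaceStability}} annotations. The sketch below 
uses local stand-in annotations so it compiles on its own, and the method body 
is a placeholder; the actual patch may instead (or additionally) change the 
method's Java visibility.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Local stand-ins for Hadoop's org.apache.hadoop.classification
// annotations, defined here only so the sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME) @interface Private {}
@Retention(RetentionPolicy.RUNTIME) @interface Unstable {}

class FileSystemSketch {
    // Audience/stability markers signal to downstream projects that
    // this API is internal to Hadoop and may change without notice.
    @Private
    @Unstable
    public Object newFSDataOutputStreamBuilder() {
        return new Object(); // placeholder body for illustration
    }

    public static void main(String[] args) throws Exception {
        boolean marked = FileSystemSketch.class
            .getMethod("newFSDataOutputStreamBuilder")
            .isAnnotationPresent(Private.class);
        System.out.println(marked); // true
    }
}
```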


> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit the this API within Hadoop 
> project to prevent it being used by end users or other projects.






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998715#comment-15998715
 ] 

Anu Engineer commented on HADOOP-14384:
---

[~eddyxu] Thanks for the comments. Maybe I was not very clear with my 
questions. Would you please take a little time and share your thoughts on each 
of the points below? Here are my specific questions.

* Why make this private if we are not planning to ship this? In other words, I 
still fail to see why we are making this private. The original issue was that 
it was a good concept badly executed. Does that not stand even if we make this 
private?

* Given the comments on the original JIRA, I would presume that this is going 
to be reworked to an extent?
bq. This new Builder interface is part of larger effort in EC development in 
trunk. So it is desired to be kept in trunk
This is what I am trying to understand. I am all for keeping this in trunk, but 
are you saying we will rework this or that we will not? Making it private does 
not provide a clear picture.

* Are you concerned that someone might use this accidentally? What evil are we 
trying to prevent by making this private?







> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit the this API within Hadoop 
> project to prevent it being used by end users or other projects.






[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998703#comment-15998703
 ] 

Andras Bokor commented on HADOOP-14389:
---

Adding "Incompatible change" since it uses 
[Splitter.splitToList|https://google.github.io/guava/releases/18.0/api/docs/com/google/common/base/Splitter.html#splitToList(java.lang.CharSequence)]
 which has only been available in Guava since 15.0.

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14389.01.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Actual {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it the regex 
> engine parsed incorrectly.
> In addition:
> In tests some corner cases are not covered.






[jira] [Updated] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14389:
--
Hadoop Flags: Incompatible change

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14389.01.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Actual {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it the regex 
> engine parsed incorrectly.
> In addition:
> In tests some corner cases are not covered.






[jira] [Updated] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14389:
--
Status: Patch Available  (was: Open)

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14389.01.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Actual {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it the regex 
> engine parsed incorrectly.
> In addition:
> In tests some corner cases are not covered.






[jira] [Updated] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14389:
--
Attachment: HADOOP-14389.01.patch

I have addressed the wrong exceptions by fixing 
{{KerberosName.Rule#replaceParameters}} and making {{ruleParser}} stricter.

bq. \[^\\\]\] does not really make sense
It can be removed by changing some logic in {{KerberosName#parseRules}} so that 
it splits the rules by new lines.
This also fixes an error-message issue: so far, in case of error it logged 
{{remaining}}, which contains all the rules; with the new logic it logs only 
the rule that throws the exception.

bq. In tests some corner cases are not covered.
I have added all the test cases that came to mind.

I also built the hadoop-auth jar and replaced it on a test cluster.
{code}root@abokor-practice-5:/etc/hadoop-3.0.0-alpha2# 
/etc/hadoop-3.0.0-alpha2/bin/hadoop kerbname 
{nn,dn,rm,nm,jhs}/host.dom...@realm.tld
Name: nn/host.dom...@realm.tld to hdfs
Name: dn/host.dom...@realm.tld to hdfs
Name: rm/host.dom...@realm.tld to yarn
Name: nm/host.dom...@realm.tld to yarn
Name: jhs/host.dom...@realm.tld to mapred{code}

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14389.01.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Actual {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it the regex 
> engine parsed incorrectly.
> In addition:
> In tests some corner cases are not covered.






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998698#comment-15998698
 ] 

Lei (Eddy) Xu commented on HADOOP-14384:


Hi, [~anu] Thanks for the comments. This new {{Builder}} interface is part of a 
larger effort on erasure coding (EC) development in trunk, so it is desirable 
to keep it in trunk. 

[~ste...@apache.org] Thanks for the review!



> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.






[jira] [Commented] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998452#comment-15998452
 ] 

Jonathan Eagles commented on HADOOP-14387:
--

Thanks for filing this jira and bringing this to my attention, 
[~ste...@apache.org]. Will post a patch shortly.

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Assignee: Jonathan Eagles
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Assigned] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reassigned HADOOP-14387:


Assignee: Jonathan Eagles

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Assignee: Jonathan Eagles
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Commented] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998312#comment-15998312
 ] 

Hadoop QA commented on HADOOP-14388:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866612/HADOOP-14388.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a697003c987f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4e6bbd0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12249/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12249/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12249/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP

[jira] [Commented] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998246#comment-15998246
 ] 

Hudson commented on HADOOP-14382:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11690 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11690/])
HADOOP-14382 Remove usages of MoreObjects.toStringHelper. Contributed by 
(stevel: rev 4e6bbd049dead7008942bda09dfd54542c407f48)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/AbstractMetric.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsRegistry.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/MetricsCache.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MsInfo.java
* (edit) 
hadoop-tools/hadoop-kafka/src/test/java/org/apache/hadoop/metrics2/impl/TestKafkaMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsTag.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsInfoImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/AbstractMetricsRecord.java


> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.
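For illustration only (not necessarily the exact replacement used in the patch), a plain Java 8 {{java.util.StringJoiner}} can produce the same "ClassName{key=value, ...}" shape that {{MoreObjects.toStringHelper}} generates, with no Guava dependency; class and field names below are made up:

```java
import java.util.StringJoiner;

public class ToStringSketch {
    // Before (Guava, version-sensitive across releases):
    //   MoreObjects.toStringHelper(this).add("name", name).add("value", value).toString()
    // After (pure Java 8): same "ClassName{name=..., value=...}" output shape.
    static String describe(String className, String name, int value) {
        return new StringJoiner(", ", className + "{", "}")
            .add("name=" + name)
            .add("value=" + value)
            .toString();
    }

    public static void main(String[] args) {
        System.out.println(describe("MetricsTag", "context", 42));
        // MetricsTag{name=context, value=42}
    }
}
```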






[jira] [Created] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-14389:
-

 Summary: Exception handling is incorrect in KerberosName.java
 Key: HADOOP-14389
 URL: https://issues.apache.org/jira/browse/HADOOP-14389
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


I found multiple inconsistencies:

Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
Principal: {{nn/host.dom...@realm.tld}}
Expected exception: {{BadStringFormat: ...3 is out of range...}}
Actual exception: {{ArrayIndexOutOfBoundsException: 3}}

Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
Expected: {{IllegalArgumentException}}
Actual: {{java.lang.NumberFormatException: For input string: ""}}

Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
Expected {{BadStringFormat: -1 is outside of valid range...}}
Actual: {{java.lang.NumberFormatException: For input string: ""}}

Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
Expected {{java.lang.NumberFormatException: For input string: "one"}}
Actual {{java.lang.NumberFormatException: For input string: ""}}


In addition:
{code}[^\\]]{code}
does not really make sense in {{ruleParser}}. Most probably it was needed 
because we parse the whole rule string and remove each parsed rule from the 
beginning of the string ({{KerberosName#parseRules}}); without it the regex 
engine parsed incorrectly.

In addition:
In tests some corner cases are not covered.
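For context, here is a stand-alone sketch (illustration only; class and method names are made up, this is not the actual {{KerberosName}} code) of how a rule format such as {{$1/$2@$0}} substitutes principal components, and why an out-of-range reference like {{$3}} for a two-component principal should raise a clear error rather than an {{ArrayIndexOutOfBoundsException}}:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RuleSketch {
    // Split "nn/host.domain@REALM.TLD" into components:
    // index 0 = realm, index 1 = "nn", index 2 = "host.domain".
    static String[] components(String principal) {
        int at = principal.lastIndexOf('@');
        String[] parts = principal.substring(0, at).split("/");
        String[] out = new String[parts.length + 1];
        out[0] = principal.substring(at + 1);
        System.arraycopy(parts, 0, out, 1, parts.length);
        return out;
    }

    // Substitute $n references in a format string such as "$1/$2@$0";
    // an index past the component count raises a descriptive error.
    static String replaceParameters(String format, String[] params) {
        Matcher m = Pattern.compile("\\$(\\d+)").matcher(format);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            int i = Integer.parseInt(m.group(1));
            if (i >= params.length) {
                throw new IllegalArgumentException(i + " is out of range");
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(params[i]));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] c = components("nn/host.domain@REALM.TLD");
        System.out.println(replaceParameters("$1/$2@$0", c)); // nn/host.domain@REALM.TLD
        try {
            replaceParameters("$1/$2@$3", c); // $3 out of range for 2 components
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // 3 is out of range
        }
    }
}
```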








[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Attachment: (was: HADOOP-14388-2.8.x.patch)

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.
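The fix described above amounts to moving the call inside the existing keystore null check. A minimal control-flow sketch with stand-in classes (not the real Jetty/HttpServer2 API; names are made up):

```java
public class KeyPasswordSketch {
    // Stand-in for the SSL connector; only the call shape matters here.
    static class Connector {
        String keyPassword;
        void setKeyPassword(String p) {
            if (p == null) {
                throw new IllegalArgumentException("null key password");
            }
            keyPassword = p;
        }
    }

    // Sketch of the fix: only set the key password when a keystore
    // (and hence a password) was actually read from ssl-server.xml.
    static void configure(Connector c, String keystore, String keyPassword) {
        if (keystore != null) {
            // ... existing keystore setup would go here ...
            c.setKeyPassword(keyPassword);
        }
        // With no keystore the call is skipped instead of passing null,
        // so startup can fail later with a clear error instead of hanging.
    }

    public static void main(String[] args) {
        Connector c = new Connector();
        configure(c, null, null);          // no ssl-server.xml: call is skipped
        System.out.println(c.keyPassword); // null
        configure(c, "server.jks", "pw");  // keystore present: password is set
        System.out.println(c.keyPassword); // pw
    }
}
```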






[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Attachment: HADOOP-14388.patch

A patch for trunk.

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388-2.8.x.patch, HADOOP-14388.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Commented] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998237#comment-15998237
 ] 

Hadoop QA commented on HADOOP-14388:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-14388 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866611/HADOOP-14388-2.8.x.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12248/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388-2.8.x.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Attachment: HADOOP-14388-2.8.x.patch

A patch for 2.8.x + 2.7.x

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388-2.8.x.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Attachment: (was: HADOOP-14388.patch)

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Updated] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14382:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks!

> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.






[jira] [Commented] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998232#comment-15998232
 ] 

Steve Loughran commented on HADOOP-14387:
-

I think maybe the story here would be to declare that some resources may be 
absent, with the -site.xml files on that list; that way, if there's no 
hdfs-default or core-default, things would fail fast, but the site files would 
be considered optional.
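A minimal sketch of that idea using only a classpath lookup (a hypothetical helper, not the actual {{Configuration}} code): default resources are required, while {{-site.xml}} resources may be absent:

```java
import java.net.URL;

public class OptionalResourceSketch {
    // Hypothetical policy: *-default.xml must be present, *-site.xml may be absent.
    static boolean isOptional(String name) {
        return name.endsWith("-site.xml");
    }

    // Returns the resource URL, null when an optional resource is absent,
    // and fails fast when a required resource is missing.
    static URL locate(String name) {
        URL url = OptionalResourceSketch.class.getClassLoader().getResource(name);
        if (url == null && !isOptional(name)) {
            throw new IllegalStateException("required resource missing: " + name);
        }
        return url;
    }

    public static void main(String[] args) {
        // A site file absent from the classpath is skipped quietly.
        System.out.println(locate("core-site.xml"));
        try {
            // A default file is required, so a missing one fails fast.
            locate("core-default.xml");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```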

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Commented] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998230#comment-15998230
 ] 

Hadoop QA commented on HADOOP-14388:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-14388 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866605/HADOOP-14388.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12247/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Attachment: HADOOP-14388.patch

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-14388.patch
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Updated] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HADOOP-14388:
-
Status: Patch Available  (was: Open)

> Don't set the key password if there is a problem reading SSL configuration
> --
>
> Key: HADOOP-14388
> URL: https://issues.apache.org/jira/browse/HADOOP-14388
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.0
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.7.4, 2.8.1
>
>
> When launching MiniDFSCluster in a test, with 
> "dfs.data.transfer.protection=integrity" and without specifying a 
> ssl-server.xml, the code hangs on "builder.build()". 
> This is because in HttpServer2, it is setting a null value on the 
> SslSocketConnector:
> c.setKeyPassword(keyPassword);
> Instead, this call should be inside the "if (keystore != null) {" block. Once 
> this is done the code exits as expected with an error.






[jira] [Created] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)
Colm O hEigeartaigh created HADOOP-14388:


 Summary: Don't set the key password if there is a problem reading 
SSL configuration
 Key: HADOOP-14388
 URL: https://issues.apache.org/jira/browse/HADOOP-14388
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.3, 2.8.0
Reporter: Colm O hEigeartaigh
Assignee: Colm O hEigeartaigh
Priority: Minor
 Fix For: 2.7.4, 2.8.1


When launching MiniDFSCluster in a test, with 
"dfs.data.transfer.protection=integrity" and without specifying a 
ssl-server.xml, the code hangs on "builder.build()". 

This is because in HttpServer2, it is setting a null value on the 
SslSocketConnector:

c.setKeyPassword(keyPassword);

Instead, this call should be inside the "if (keystore != null) {" block. Once 
this is done the code exits as expected with an error.
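The proposed fix can be sketched with a stand-in connector (the names below are illustrative, not the actual HttpServer2/SslSocketConnector code):

```java
// Stand-in for the SSL connector configuration path. The point of the fix:
// only set the key password when a keystore was actually resolved.
class SslConnectorStub {
  String keystore;
  String keyPassword;

  void configure(String keystore, String keyPassword) {
    if (keystore != null) {
      this.keystore = keystore;
      // The key password only makes sense alongside a loaded keystore.
      this.keyPassword = keyPassword;
    }
    // With no keystore, leave both unset; startup then fails with a clear
    // error instead of hanging on a null password.
  }
}
```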






[jira] [Commented] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998173#comment-15998173
 ] 

Steve Loughran commented on HADOOP-14387:
-

cc [~jeagles]

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Commented] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998172#comment-15998172
 ] 

Steve Loughran commented on HADOOP-14387:
-

I'm assuming this is HADOOP-14216 related

We don't see this in Hadoop's own tests, as we tend to have a core-site in 
test/resources; this is one of mine [which 
doesn't|https://github.com/steveloughran/spark-cloud-examples/tree/master/cloud-examples]

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Updated] (HADOOP-14387) new Configuration().get() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14387:

Summary: new Configuration().get() fails if core-site.xml isn't on the 
classpath  (was: new Configuration() fails if core-site.xml isn't on the 
classpath)

> new Configuration().get() fails if core-site.xml isn't on the classpath
> ---
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Commented] (HADOOP-14387) new Configuration() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998164#comment-15998164
 ] 

Steve Loughran commented on HADOOP-14387:
-

{code}
- Check Hadoop version *** FAILED ***
  java.lang.RuntimeException: java.io.IOException: Fetch fail on include with 
no fallback while loading 'core-site.xml'
  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2872)
  at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2657)
  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
  at org.apache.hadoop.conf.Configuration.get(Configuration.java:1325)
  at 
com.hortonworks.spark.cloud.CloudSuite$.loadConfiguration(CloudSuite.scala:353)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply$mcV$sp(HadoopVersionSuite.scala:32)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply(HadoopVersionSuite.scala:31)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply(HadoopVersionSuite.scala:31)
  at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
  ...
  Cause: java.io.IOException: Fetch fail on include with no fallback while 
loading 'core-site.xml'
  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2831)
  at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2657)
  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
  at org.apache.hadoop.conf.Configuration.get(Configuration.java:1325)
  at 
com.hortonworks.spark.cloud.CloudSuite$.loadConfiguration(CloudSuite.scala:353)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply$mcV$sp(HadoopVersionSuite.scala:32)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply(HadoopVersionSuite.scala:31)
  at 
com.hortonworks.spark.cloud.common.HadoopVersionSuite$$anonfun$1.apply(HadoopVersionSuite.scala:31)
  at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
{code}

> new Configuration() fails if core-site.xml isn't on the classpath
> -
>
> Key: HADOOP-14387
> URL: https://issues.apache.org/jira/browse/HADOOP-14387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha3
> Environment: test run in downstream project with no core-site in 
> test/resources
>Reporter: Steve Loughran
>Priority: Blocker
>
> If you try to create a config via {{new Configuration()}} and there isn't a 
> {{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
> the failure to load.
> This is a regression which breaks downstream apps that don't need a core-site 
> to run, but do want to load core-default &c






[jira] [Created] (HADOOP-14387) new Configuration() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14387:
---

 Summary: new Configuration() fails if core-site.xml isn't on the 
classpath
 Key: HADOOP-14387
 URL: https://issues.apache.org/jira/browse/HADOOP-14387
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0-alpha3
 Environment: test run in downstream project with no core-site in 
test/resources
Reporter: Steve Loughran
Priority: Blocker


If you try to create a config via {{new Configuration()}} and there isn't a 
{{core-site.xml}} on the CP, you get a stack trace. Previously it'd just skip 
the failure to load.

This is a regression which breaks downstream apps that don't need a core-site 
to run, but do want to load core-default &c






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998158#comment-15998158
 ] 

Steve Loughran commented on HADOOP-14384:
-

I think it should stay in trunk, but we may want to pull it out of FileSystem 
and keep it in HDFS so that HDFS-specific code can use it.

For now though, the patch LGTM.

+1

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit the this API within Hadoop 
> project to prevent it being used by end users or other projects.






[jira] [Commented] (HADOOP-13372) MR jobs can not access Swift filesystem if Kerberos is enabled

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998142#comment-15998142
 ] 

Steve Loughran commented on HADOOP-13372:
-

As soon as the test is pulled down I'll commit this

> MR jobs can not access Swift filesystem if Kerberos is enabled
> --
>
> Key: HADOOP-13372
> URL: https://issues.apache.org/jira/browse/HADOOP-13372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/swift, security
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-13372.001.patch
>
>
> {code}
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:262)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:303)
> at 
> org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:524)
> at 
> org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> Caused by: java.net.UnknownHostException:
> {code}






[jira] [Commented] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998001#comment-15998001
 ] 

Steve Loughran commented on HADOOP-14382:
-

LGTM

+1

> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.






[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998000#comment-15998000
 ] 

Steve Loughran commented on HADOOP-9565:


Yes, the committer stuff is obsoleted for now by HADOOP-13786. That can still make 
use of a probe for a feature though, especially to see if there's a consistent FS 
(actually, FileOutputCommitter could do that check internally, and warn on any FS 
without one: S3 unless it says otherwise, and Swift).

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-008.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make explicit the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.






[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-05-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997995#comment-15997995
 ] 

Steve Loughran commented on HADOOP-9565:


Linking to HDFS-11644, which relates to something similar for exposing 
capabilities of specific streams. As well as covering syncable-ness (v. 
important for HBase), we could share some other facts about the streams.

I had a talk with Mingliang about this yesterday, and explained why I like strings 
over enums:

# not brittle to versions: code built against, say, Hadoop 3.1.3 could still 
probe Hadoop 3.1.1 for a feature only added in Hadoop 3.1.2
# lets specific filesystems declare specific capabilities without having to 
make changes in the base class.

Looking at the recent patches, though: they're overcomplex. I'm now considering:

# just use a Configuration(false) object as a way of declaring behaviour
# something equivalent to contract-test-options.xml for each FS, using a naming 
scheme like "fs.feature.*"; different stores could add their own, e.g. 
"fs.s3a.feature.consistent". 
# pull out anything for validating feature sets into a helper class
# make the probe {{hasFeature(path, feature)}} so that filesystems which relay 
FS calls (e.g. viewfs, webhdfs) can propagate the probe.
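A rough sketch of the string-keyed probe being proposed (the class and the feature keys below are hypothetical, not a committed Hadoop API):

```java
import java.util.HashMap;
import java.util.Map;

// String keys rather than enums: a client built against a newer release can
// probe an older one without a linkage error, and individual stores can add
// keys such as "fs.s3a.feature.consistent" without touching the base class.
class FeatureProbeFs {
  private final Map<String, Boolean> features = new HashMap<>();

  FeatureProbeFs() {
    features.put("fs.feature.rename-atomic", true);
    features.put("fs.s3a.feature.consistent", false);
  }

  /** Unknown keys report false, so cross-version probes never throw. */
  boolean hasFeature(String path, String feature) {
    return features.getOrDefault(feature, false);
  }
}
```

Taking a path argument lets relaying filesystems (viewfs, webhdfs) forward the probe to whichever store actually backs that path.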

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-008.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make explicit the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.






[jira] [Commented] (HADOOP-14386) Make trunk work with Guava 11.0.2 again

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997979#comment-15997979
 ] 

Hadoop QA commented on HADOOP-14386:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 5 new + 233 unchanged 
- 10 fixed = 238 total (was 243) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestQueueManager |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.ser

[jira] [Commented] (HADOOP-14379) In federation mode,"hdfs dfsadmin -report" command can not be used

2017-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997969#comment-15997969
 ] 

Hadoop QA commented on HADOOP-14379:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HADOOP-14379 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14379 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866539/HADOOP-14379.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12246/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> In federation mode,"hdfs dfsadmin -report" command can not be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0-alpha2
>Reporter: lixinglong
> Attachments: HADOOP-14379.001.patch, HADOOP-14379.002.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 T
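The per-nameservice output above suggests the shape of the fix: instead of requiring the default FileSystem to be an HDFS filesystem (which fails for viewfs://), a federation-aware report iterates every underlying nameservice and prints one report section per namespace. The patch itself is not reproduced in this thread, so the following is only a self-contained toy sketch of that dispatch loop; the class name, the in-memory stats map, and the byte-formatting helper are hypothetical stand-ins for the real DistributedFileSystem/ClientProtocol calls, and only the nameservice names and capacity figures are taken from the sample output above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a federation-aware "dfsadmin -report": iterate each
// nameservice and emit one summary section per namespace, mimicking the
// sample output in the issue description. Real code would resolve the
// child filesystems behind viewfs:// and query each NameNode instead.
public class FederatedReportSketch {

    // Mirrors the "Configured Capacity: 2682939863040 (2.44 TB)" style
    // used by dfsadmin: raw bytes plus a human-readable approximation.
    static String human(long bytes) {
        String[] units = {"B", "KB", "MB", "GB", "TB", "PB"};
        double v = bytes;
        int u = 0;
        while (v >= 1024 && u < units.length - 1) {
            v /= 1024;
            u++;
        }
        return String.format("%.2f %s", v, units[u]);
    }

    public static void main(String[] args) {
        // nameservice -> {configured, present, remaining, used} in bytes;
        // values copied from the sample report in the issue description.
        Map<String, long[]> nameservices = new LinkedHashMap<>();
        nameservices.put("hdfs://nameservice",
                new long[]{2682939863040L, 2170811387904L, 2170810589184L, 798720L});
        nameservices.put("hdfs://nameservice1",
                new long[]{2682939863040L, 2170811387904L, 2170810589184L, 798720L});

        // One report section per namespace, as in the patched output.
        for (Map.Entry<String, long[]> e : nameservices.entrySet()) {
            long[] s = e.getValue();
            System.out.println(e.getKey());
            System.out.printf("Configured Capacity: %d (%s)%n", s[0], human(s[0]));
            System.out.printf("Present Capacity: %d (%s)%n", s[1], human(s[1]));
            System.out.printf("DFS Remaining: %d (%s)%n", s[2], human(s[2]));
            System.out.printf("DFS Used: %d (%s)%n", s[3], human(s[3]));
        }
    }
}
```

Running this prints two sections, one per nameservice, which is the behavioral change the patch demonstrates: the report no longer aborts on a non-HDFS default filesystem but covers every namespace in the federation.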

[jira] [Updated] (HADOOP-14379) In federation mode, "hdfs dfsadmin -report" command cannot be used

2017-05-05 Thread lixinglong (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lixinglong updated HADOOP-14379:

Affects Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> In federation mode, "hdfs dfsadmin -report" command cannot be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0-alpha2
>Reporter: lixinglong
> Attachments: HADOOP-14379.001.patch, HADOOP-14379.002.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:05 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: