[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-03-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405837#comment-16405837
 ] 

genericqa commented on HADOOP-12760:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
32s{color} | {color:green} root generated 0 new + 1266 unchanged - 6 fixed = 
1266 total (was 1272) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-12760 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915252/HADOOP-12760.07.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0b63c552f323 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e65ff1c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14330/testReport/ |
| Max. process+thread count | 1426 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14330/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15328) Fix the typo in HttpAuthentication.md

2018-03-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405797#comment-16405797
 ] 

genericqa commented on HADOOP-15328:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915246/HADOOP-15328.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 1339c14d9e61 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e65ff1c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14329/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the typo in HttpAuthentication.md
> -
>
> Key: HADOOP-15328
> URL: https://issues.apache.org/jira/browse/HADOOP-15328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15328.001.patch
>
>
> There is a typo, {{AuthenticatorHandler}}, in HttpAuthentication.md. 
>  
> {code:java}
> If a custom authentication mechanism is required for the HTTP web-consoles, 
> it is possible to implement a plugin to support the alternate authentication 
> mechanism (refer to Hadoop hadoop-auth for details on writing an 
> AuthenticatorHandler).
> {code}
> There is no {{AuthenticatorHandler}} in Hadoop hadoop-auth; 
> {{AuthenticatorHandler}} should be replaced with {{AuthenticationHandler}}.
>  






[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-03-19 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405791#comment-16405791
 ] 

Akira Ajisaka commented on HADOOP-12760:


Thanks, [~ajayydv]. Updated the patch to add logging statements in 
CryptoStreamUtils and NativeIO for the case where unmap fails.

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner
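To illustrate the shape of a version-tolerant fix, here is a minimal sketch (an assumption-laden illustration, not the attached patch): look the cleaner up through the buffer rather than importing it from either package, and log rather than fail when it cannot be invoked, in the spirit of the logging the 07 patch adds to CryptoStreamUtils and NativeIO.

{code:java}
// Sketch only: locate the JDK Cleaner via the buffer itself so the code
// compiles without referencing sun.misc.Cleaner or jdk.internal.ref.Cleaner.
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class BufferUnmapper {
  private BufferUnmapper() {}

  public static void tryUnmap(ByteBuffer buffer) {
    if (buffer == null || !buffer.isDirect()) {
      return; // heap buffers have no native memory to release
    }
    try {
      // sun.nio.ch.DirectBuffer#cleaner() returns sun.misc.Cleaner on JDK 8
      // and jdk.internal.ref.Cleaner on JDK 9, so invoke it reflectively.
      Method cleanerMethod = buffer.getClass().getMethod("cleaner");
      cleanerMethod.setAccessible(true);
      Object cleaner = cleanerMethod.invoke(buffer);
      if (cleaner != null) {
        Method clean = cleaner.getClass().getMethod("clean");
        clean.setAccessible(true);
        clean.invoke(cleaner);
      }
    } catch (Exception e) {
      // On JDK 9 setAccessible may be refused by the module system; log and
      // continue, since a missed unmap only delays native memory release.
      System.err.println("Failed to unmap direct buffer: " + e);
    }
  }
}
{code}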






[jira] [Updated] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-03-19 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-12760:
---
Attachment: HADOOP-12760.07.patch

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner






[jira] [Commented] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405772#comment-16405772
 ] 

genericqa commented on HADOOP-14067:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 30s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-14067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915231/HADOOP-14067.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d08316447b8c 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e65ff1c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14328/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14328/testReport/ |
| Max. process+thread count | 1586 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |

[jira] [Commented] (HADOOP-13024) Distcp with -delete feature on raw data not implemented

2018-03-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405758#comment-16405758
 ] 

Wei-Chiu Chuang commented on HADOOP-13024:
--

I think you should use the target version field instead of the fix version 
field; 2.8.0 and 3.0.0-alpha2 have already been released.

> Distcp with -delete feature on raw data not implemented
> ---
>
> Key: HADOOP-13024
> URL: https://issues.apache.org/jira/browse/HADOOP-13024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Assignee: Mavin Martin
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13024.patch, HADOOP-13024.patch, 
> HADOOP-13024.patch.10, HADOOP-13024.patch.3, HADOOP-13024.patch.4, 
> HADOOP-13024.patch.5, HADOOP-13024.patch.6, HADOOP-13024.patch.7, 
> HADOOP-13024.patch.8, HADOOP-13024.patch.9
>
>
> When doing a distcp of raw data using the -delete feature, the following bug appears.
> {code}
> [root@xxx bin]# hadoop distcp -delete -update /.reserved/raw/tmp/a 
> /.reserved/raw/tmp/b
> 16/04/14 02:54:01 ERROR tools.DistCp: Exception encountered
> java.io.IOException: DistCp failure: Job job_xxx has failed: Job commit 
> failed: org.apache.hadoop.tools.CopyListing$InvalidInputException: The source 
> path 'hdfs://nn/.reserved/raw/tmp/b' starts with /.reserved/raw but the 
> target path 'hdfs://nn/NONE' does not. Either all or none of the paths must 
> have this prefix.
> at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:141)
> at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
> at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
> at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
> at 
> org.apache.hadoop.tools.mapred.CopyCommitter.deleteMissing(CopyCommitter.java:244)
> at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:94)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:187)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {code}
> The issue is not with the distributed copy itself. It arises when DistCp tries 
> to delete files in the target that no longer exist in the source: the listing 
> is revalidated, and the placeholder target path NONE fails the /.reserved/raw 
> prefix check.
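To make the failure mode concrete, here is a minimal sketch (not the DistCp source) of the kind of /.reserved/raw prefix check that {{SimpleCopyListing#validatePaths}} in the stack trace performs and that the placeholder target {{NONE}} cannot pass:

{code:java}
import java.io.IOException;

public final class RawPrefixCheck {
  private static final String RAW = "/.reserved/raw";

  // Both paths must agree on the /.reserved/raw prefix, or neither may use it.
  static void validate(String source, String target) throws IOException {
    boolean srcRaw = source.contains(RAW);
    boolean tgtRaw = target.contains(RAW);
    if (srcRaw != tgtRaw) {
      throw new IOException("The source path '" + source + "' starts with "
          + RAW + " but the target path '" + target + "' does not. Either all"
          + " or none of the paths must have this prefix.");
    }
  }

  public static void main(String[] args) throws IOException {
    // The -delete commit phase rebuilds a listing with target "NONE", which
    // fails this check even though the copy itself already succeeded.
    validate("hdfs://nn/.reserved/raw/tmp/b", "hdfs://nn/NONE"); // throws
  }
}
{code}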






[jira] [Updated] (HADOOP-15328) Fix the typo in HttpAuthentication.md

2018-03-19 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15328:
-
Status: Patch Available  (was: Open)

> Fix the typo in HttpAuthentication.md
> -
>
> Key: HADOOP-15328
> URL: https://issues.apache.org/jira/browse/HADOOP-15328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15328.001.patch
>
>
> There is a typo, {{AuthenticatorHandler}}, in HttpAuthentication.md. 
>  
> {code:java}
> If a custom authentication mechanism is required for the HTTP web-consoles, 
> it is possible to implement a plugin to support the alternate authentication 
> mechanism (refer to Hadoop hadoop-auth for details on writing an 
> AuthenticatorHandler).
> {code}
> There is no {{AuthenticatorHandler}} in Hadoop hadoop-auth; 
> {{AuthenticatorHandler}} should be replaced with {{AuthenticationHandler}}.
>  






[jira] [Updated] (HADOOP-15328) Fix the typo in HttpAuthentication.md

2018-03-19 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15328:
-
Attachment: HADOOP-15328.001.patch

> Fix the typo in HttpAuthentication.md
> -
>
> Key: HADOOP-15328
> URL: https://issues.apache.org/jira/browse/HADOOP-15328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15328.001.patch
>
>
> There is a typo, {{AuthenticatorHandler}}, in HttpAuthentication.md. 
>  
> {code:java}
> If a custom authentication mechanism is required for the HTTP web-consoles, 
> it is possible to implement a plugin to support the alternate authentication 
> mechanism (refer to Hadoop hadoop-auth for details on writing an 
> AuthenticatorHandler).
> {code}
> There is no {{AuthenticatorHandler}} in Hadoop hadoop-auth; 
> {{AuthenticatorHandler}} should be replaced with {{AuthenticationHandler}}.
>  






[jira] [Created] (HADOOP-15328) Fix the typo in HttpAuthentication.md

2018-03-19 Thread fang zhenyi (JIRA)
fang zhenyi created HADOOP-15328:


 Summary: Fix the typo in HttpAuthentication.md
 Key: HADOOP-15328
 URL: https://issues.apache.org/jira/browse/HADOOP-15328
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: fang zhenyi
Assignee: fang zhenyi


There is a typo, {{AuthenticatorHandler}}, in HttpAuthentication.md. 

 
{code:java}
If a custom authentication mechanism is required for the HTTP web-consoles, it 
is possible to implement a plugin to support the alternate authentication 
mechanism (refer to Hadoop hadoop-auth for details on writing an 
AuthenticatorHandler).
{code}
There is no {{AuthenticatorHandler}} in Hadoop hadoop-auth; 
{{AuthenticatorHandler}} should be replaced with {{AuthenticationHandler}}.

 






[jira] [Comment Edited] (HADOOP-14667) Flexible Visual Studio support

2018-03-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405701#comment-16405701
 ] 

Brahma Reddy Battula edited comment on HADOOP-14667 at 3/20/18 3:02 AM:


{quote}I also tested VS2010 and also worked
{quote}
Getting the following error; do we need to update anything for this to work on VS2010?
{noformat}
[DEBUG] Toolchains are ignored, 'executable' parameter is set to 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-supp
ort\bin\win-vs-upgrade.cmd
[DEBUG] Executing command line: [cmd, /c, 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-support\bin\win-vs-upgrade.
cmd, F:\t\hadoop-common-project\hadoop-common\src\main\winutils, 
F:\t\hadoop-common-project\hadoop-common\target]
INFO: Could not find files for the given pattern(s).
"devenv command was not found. Verify your compiler installation level."
{noformat}


was (Author: brahmareddy):
bq.I also tested VS2010 and also worked
Getting the following error; do we need to update anything for this to work on 
VS2010? Won't it be incompatible?

{noformat}
[DEBUG] Toolchains are ignored, 'executable' parameter is set to 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-supp
ort\bin\win-vs-upgrade.cmd
[DEBUG] Executing command line: [cmd, /c, 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-support\bin\win-vs-upgrade.
cmd, F:\t\hadoop-common-project\hadoop-common\src\main\winutils, 
F:\t\hadoop-common-project\hadoop-common\target]
INFO: Could not find files for the given pattern(s).
"devenv command was not found. Verify your compiler installation level."
{noformat}

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HADOOP-14067:
---
Attachment: HADOOP-14067.02.patch

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.
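A minimal sketch of the fix described above (illustrative, not the attached patch): resolve the resource against the classloader that loaded the class instead of the thread context classloader.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class VersionProps {
  static Properties load(String resource) throws IOException {
    // The loader of this class is the one that holds the hadoop jars; the
    // thread context classloader may be a different loader in embedding tools
    // such as SQuirreL SQL and then fails to see the resource.
    ClassLoader cl = VersionProps.class.getClassLoader();
    if (cl == null) {
      cl = ClassLoader.getSystemClassLoader(); // class came from bootstrap
    }
    try (InputStream in = cl.getResourceAsStream(resource)) {
      if (in == null) {
        throw new IOException(resource + " not found on classpath");
      }
      Properties props = new Properties();
      props.load(in);
      return props;
    }
  }
}
{code}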






[jira] [Commented] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405705#comment-16405705
 ] 

Thejas M Nair commented on HADOOP-14067:


Attaching 02.patch with checkstyle, javadoc fixes.


> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch, 
> HADOOP-14067.02.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.






[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support

2018-03-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405701#comment-16405701
 ] 

Brahma Reddy Battula commented on HADOOP-14667:
---

bq.I also tested VS2010 and also worked
Getting the following error; do we need to update anything for this to work on 
VS2010? Won't it be incompatible?

{noformat}
[DEBUG] Toolchains are ignored, 'executable' parameter is set to 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-supp
ort\bin\win-vs-upgrade.cmd
[DEBUG] Executing command line: [cmd, /c, 
F:\t\hadoop-common-project\hadoop-common\..\..\dev-support\bin\win-vs-upgrade.
cmd, F:\t\hadoop-common-project\hadoop-common\src\main\winutils, 
F:\t\hadoop-common-project\hadoop-common\target]
INFO: Could not find files for the given pattern(s).
"devenv command was not found. Verify your compiler installation level."
{noformat}

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-19 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405682#comment-16405682
 ] 

shanyu zhao commented on HADOOP-15320:
--

I've run a few Spark jobs on a very large input file (hundreds of TB); 
getSplits() on this file took a few seconds with the change, vs. 1.5 hours 
without it.

I'm in the middle of running Hive TPC-H tests.

Anything else we should run?

As [~chris.douglas] mentioned, since S3A is running fine with the default 
implementation, we should be good to go for this patch.

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with it is that for large (~TB) files we generate lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation and 
> fall back to the default FileSystem.getFileBlockLocations(), which returns one 
> block for any file, with the single host "localhost". Note that this doesn't 
> mean we will create many fewer splits, because the number of splits is still 
> limited by the blockSize in FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}
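A worked example (with assumed sizes, not numbers from this issue) of why one reported block does not collapse everything into one split:

{code:java}
public final class SplitSizeExample {
  // Same expression as FileInputFormat.computeSplitSize().
  static long computeSplitSize(long goalSize, long minSize, long blockSize) {
    return Math.max(minSize, Math.min(goalSize, blockSize));
  }

  public static void main(String[] args) {
    long fileSize = 1L << 40;          // a 1 TB input file
    long blockSize = 256L << 20;       // assume a 256 MB configured block size
    long splitSize = computeSplitSize(fileSize, 1L, blockSize);
    // Even if getFileBlockLocations() reports a single "localhost" block,
    // the file is still carved into ~4096 splits of 256 MB each.
    System.out.println(fileSize / splitSize + " splits of " + splitSize + " bytes");
  }
}
{code}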






[jira] [Commented] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405672#comment-16405672
 ] 

genericqa commented on HADOOP-14067:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 12 unchanged - 0 fixed = 19 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-14067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12915214/HADOOP-14067.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9fdb6a17b4c9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fc3fa9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14327/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14327/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14327/testReport/ |

[jira] [Commented] (HADOOP-8978) TestTrash fails on Windows

2018-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405669#comment-16405669
 ] 

Íñigo Goiri commented on HADOOP-8978:
-

This is currently failing again with a related error:
{code}
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:135)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3353)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:618)
...
Caused by: java.lang.NullPointerException
at org.apache.hadoop.fs.Path.<init>(Path.java:146)
at org.apache.hadoop.fs.Path.makeQualified(Path.java:540)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:589)
at 
org.apache.hadoop.fs.TestTrash$TestLFS$1.getInitialWorkingDirectory(TestTrash.java:691)
at 
org.apache.hadoop.fs.RawLocalFileSystem.<init>(RawLocalFileSystem.java:73)
at org.apache.hadoop.fs.TestTrash$TestLFS$1.<init>(TestTrash.java:688)
at org.apache.hadoop.fs.TestTrash$TestLFS.<init>(TestTrash.java:688)
at org.apache.hadoop.fs.TestTrash$TestLFS.<init>(TestTrash.java:685)
{code}
I'll open a new JIRA if nobody has comments on this.

> TestTrash fails on Windows
> --
>
> Key: HADOOP-8978
> URL: https://issues.apache.org/jira/browse/HADOOP-8978
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Fix For: trunk-win
>
> Attachments: HADOOP-8978-branch-trunk-win.patch
>
>
> The tests assert that a file is found in trash after deleting, but it's not 
> found when run on Windows.






[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-03-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405657#comment-16405657
 ] 

Konstantin Shvachko commented on HADOOP-15253:
--

The change looks good to me. Minor things for the unit test:
# The import of {{DFSConfigKeys}} is redundant.
# It would be better to incorporate the check of the queue size change into 
{{testRefresh()}} instead of creating a new test case. That way the test will 
not start the mini cluster one more time and will run faster.

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When changing the CallQueue instance to FairCallQueue, the length of each 
> queue in FairCallQueue would be 1/priorityLevels of the original length of 
> the DefaultCallQueue. So it would be helpful to be able to set the call 
> queue length to a proper value.
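Illustrative arithmetic for the sizing concern (the handler count below is assumed; the per-handler queue length and the four priority levels are the usual defaults):

{code:java}
public final class CallQueueSizing {
  public static void main(String[] args) {
    int handlerCount = 32;     // assumed dfs.namenode.handler.count
    int lenPerHandler = 100;   // ipc.server.handler.queue.size default
    int maxQueueSize = handlerCount * lenPerHandler;   // 3200 calls in total
    int priorityLevels = 4;    // FairCallQueue default
    // After -refreshCallQueue swaps in a FairCallQueue, each sub-queue only
    // holds 1/priorityLevels of the original capacity unless maxQueueSize is
    // also refreshed.
    System.out.println(maxQueueSize / priorityLevels); // 800 per sub-queue
  }
}
{code}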






[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support

2018-03-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405632#comment-16405632
 ] 

Hudson commented on HADOOP-14667:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13856 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13856/])
HADOOP-14667. Flexible Visual Studio support. Contributed by Allen (cdouglas: 
rev 3fc3fa9711d96677f6149e173df0f57cd06ee6b9)
* (edit) BUILDING.txt
* (add) dev-support/bin/win-vs-upgrade.cmd
* (add) dev-support/win-paths-eg.cmd
* (edit) hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml


> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Commented] (HADOOP-15320) Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake

2018-03-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405583#comment-16405583
 ] 

Chris Douglas commented on HADOOP-15320:


What testing has already been done with this?

bq. do think it will need be bounced past the various tools, including: hive, 
spark, pig to see that it all goes OK. But given S3A is using that default with 
no adverse consequences, I think you'll be right.
Wouldn't one expect the same results, if the pattern worked for S3A? One would 
expect to find framework code that is unnecessarily serial after this change. 
What tests did S3A run that should be repeated?

bq. which endpoints did you run the entire hadoop-azure and 
hadoop-azuredatalake test suites?
Running these integration tests is a good idea. It's why they're there, after 
all.

> Remove customized getFileBlockLocations for hadoop-azure and 
> hadoop-azure-datalake
> --
>
> Key: HADOOP-15320
> URL: https://issues.apache.org/jira/browse/HADOOP-15320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, fs/azure
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Attachments: HADOOP-15320.patch
>
>
> hadoop-azure and hadoop-azure-datalake have their own implementations of 
> getFileBlockLocations(), which fake a list of artificial blocks based on a 
> hard-coded block size, each block having one host named "localhost". 
> Take a look at this code:
> [https://github.com/apache/hadoop/blob/release-2.9.0-RC3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L3485]
> This is an unnecessary mock-up of a "remote" file system to mimic HDFS, and 
> the problem with it is that for large (~TB) files we generate lots of 
> artificial blocks, and FileInputFormat.getSplits() is slow in calculating 
> splits based on these blocks.
> We can safely remove this customized getFileBlockLocations() implementation and 
> fall back to the default FileSystem.getFileBlockLocations(), which returns one 
> block for any file, with the single host "localhost". Note that this doesn't 
> mean we will create many fewer splits, because the number of splits is still 
> limited by the blockSize in FileInputFormat.computeSplitSize():
> {code:java}
> return Math.max(minSize, Math.min(goalSize, blockSize));{code}






[jira] [Comment Edited] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405520#comment-16405520
 ] 

Jitendra Nath Pandey edited comment on HADOOP-14067 at 3/19/18 11:21 PM:
-

+1

[~thejas], please review the javadoc and checkstyle issues in the patch.


was (Author: jnp):
+1

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.






[jira] [Updated] (HADOOP-14667) Flexible Visual Studio support

2018-03-19 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14667:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Thanks, [~elgoiri].

I committed this. Thanks, Allen

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HADOOP-14067:
---
Attachment: HADOOP-14067.01.patch

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch, HADOOP-14067.01.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.






[jira] [Comment Edited] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16341661#comment-16341661
 ] 

Bharat Viswanadham edited comment on HADOOP-14067 at 3/19/18 10:21 PM:
---

[~thejas]

The patch has checkstyle issues and javadoc errors.

Other than that +1 from me.

 

 


was (Author: bharatviswa):
The patch has checkstyle issues and javadoc errors.

Other than that +1 from me.

 

 

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.






[jira] [Comment Edited] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16341661#comment-16341661
 ] 

Bharat Viswanadham edited comment on HADOOP-14067 at 3/19/18 10:20 PM:
---

The patch has checkstyle issues and javadoc errors.

Other than that +1 from me.

 

 


was (Author: bharatviswa):
+1

LGTM.

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread's classloader might 
> not be the one that loaded the hadoop classes, including VersionInfo, and the 
> lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, since the right version is the one associated with the rest 
> of the loaded hadoop classes, not necessarily the one in the current thread's 
> classloader.
> Created a related jira - HADOOP-14066 to make methods to get version via 
> VersionInfo a public api.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2018-03-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405541#comment-16405541
 ] 

Yufei Gu commented on HADOOP-15062:
---

Thanks [~miklos.szeg...@cloudera.com]. +1.

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15322) LDAPGroupMapping search tree base improvement

2018-03-19 Thread Sherwood Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng reassigned HADOOP-15322:
---

Assignee: Sherwood Zheng

> LDAPGroupMapping search tree base improvement
> -
>
> Key: HADOOP-15322
> URL: https://issues.apache.org/jira/browse/HADOOP-15322
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 2.7.4
>Reporter: Ganesh
>Assignee: Sherwood Zheng
>Priority: Major
>
> Currently the same ldap base is used for searching posixAccount and 
> posixGroup. This request is to make a separate base for each container (i.e. 
> the posixAccount and posixGroup containers).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2018-03-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405521#comment-16405521
 ] 

Miklos Szegedi commented on HADOOP-15062:
-

I reran the Jenkins job. [~yufeigu], there are no issues this time.

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader

2018-03-19 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405520#comment-16405520
 ] 

Jitendra Nath Pandey commented on HADOOP-14067:
---

+1

> VersionInfo should load version-info.properties from its own classloader
> 
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HADOOP-14067.01.patch
>
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via 
> the current thread's classloader.
> However, for applications that load hadoop classes dynamically (e.g. 
> JDBC-based tools such as SQuirreL SQL), the current thread might not be the 
> one that loaded the hadoop classes, including VersionInfo, and it would fail 
> to find the properties file.
> The right place to look for the properties file is the classloader of the 
> VersionInfo class, as the right version is the one associated with the rest 
> of the loaded hadoop classes, and not necessarily the one in the current 
> thread's classloader.
> Created a related jira - HADOOP-14066 to make the methods to get the version 
> via VersionInfo a public API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11245) Update NFS gateway to use Netty4

2018-03-19 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405507#comment-16405507
 ] 

Bharat Viswanadham commented on HADOOP-11245:
-

[~brandonli]

Can I take up this task if you are not actively working on it?

> Update NFS gateway to use Netty4
> 
>
> Key: HADOOP-11245
> URL: https://issues.apache.org/jira/browse/HADOOP-11245
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2018-03-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-15327:
---

 Summary: Upgrade MR ShuffleHandler to use Netty4
 Key: HADOOP-15327
 URL: https://issues.apache.org/jira/browse/HADOOP-15327
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Bharat Viswanadham


This way, we can remove the dependency on Netty 3 (jboss.netty).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-19 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405479#comment-16405479
 ] 

Ajay Kumar commented on HADOOP-15317:
-

[~xiaochen], thanks for updating the patch. I think we can handle the slowness 
of the best case for #4, while maintaining equal probability across available 
nodes, by first checking whether a random int drawn over all leaves lands on 
an excluded node.
{code}
int nthValidToReturn = r.nextInt(parentNode.getNumOfLeaves());
LOG.debug("nthValidToReturn is {}", nthValidToReturn);
if (nthValidToReturn < 0) {
  return null;
}
// Fast path: if the randomly chosen leaf is not excluded, return it directly.
Node ret = parentNode.getLeaf(nthValidToReturn, excludedScopeNode);
if (!excludedNodes.contains(ret)) {
  return ret;
}

// Slow path: draw again, this time over the available (non-excluded) nodes.
Node lastValidNode = null;
nthValidToReturn = r.nextInt(availableNodes);{code}

A few comments on patch v2:
* L487 {{testChooseRandomInclude1}}: the excluded node dataNodes[7] (in "/d2/r3") 
is outside the scope of our search, "/d1". Not sure if it is intentional, but I 
think we can safely remove it, as it is not used in the test flow. Please 
correct me if my understanding is wrong.
* L485-L487 {{testChooseRandomInclude1}}, L511-L514 {{testChooseRandomInclude2}}: 
shall we randomly select the excluded nodes? For example:
{code}
Random r = new Random();
// Pick the excluded nodes at random instead of hard-coding dataNodes[7].
excludedNodes.add(dataNodes[r.nextInt(5)]);
excludedNodes.add(dataNodes[r.nextInt(5)]);
Map<Node, Integer> frequency = pickNodesAtRandom(1000, scope,
    excludedNodes);
excludedNodes.parallelStream().forEach(node -> {
  assertEquals(node.getName() + " should be excluded", 0,
      frequency.get(node).intValue());
});
{code}

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch
>
>
> Recently we found a postmortem case where the ANN seems to be stuck in an 
> infinite loop. From the logs it seems it had just gone through a rolling 
> restart, and DNs were getting registered.
> Later the NN became unresponsive, and from the stacktrace it's inside a 
> do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory 
> (thought about incorrect locking, or the Node object being modified outside 
> of NetworkTopology; both seem impossible) for why this is happening, but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15326) ClassUtil usage of URLDecode precludes '+' in jar path

2018-03-19 Thread Sean Story (JIRA)
Sean Story created HADOOP-15326:
---

 Summary: ClassUtil usage of URLDecode precludes '+' in jar path
 Key: HADOOP-15326
 URL: https://issues.apache.org/jira/browse/HADOOP-15326
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
 Environment: Java: 1.8.0_111
hadoop: 2.5.2

OSX: 10.13.3
Reporter: Sean Story


h3. Problem
ClassUtil utilizes {{URLDecoder}} to decode the path to the jar containing the 
provided {{Class}}. However, as noted here: 
https://bugs.openjdk.java.net/browse/JDK-8179507?focusedCommentId=14074306=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14074306
 {{URLDecoder}} should only be used for HTML forms, because it causes issues 
with plus signs (and other characters). 

I can demonstrate the issue in the below Spock Specification:
{noformat}
import spock.lang.Specification

class Testy extends Specification {
def "testy"() {
setup:
URL url = new 
URL("jar:file:/path/to/some+dir/hadoop-archives-2.5.2.jar!/org/apache/hadoop/tools/HadoopArchives.class")

when:
println url
def path = url.getPath()
println path
def other = URLDecoder.decode(path, "UTF-8")
println other

then:
path.contains("+")
other.contains("+")
}
}
{noformat}

I ran into this while attempting to create a HAR file, when my 
{{hadoop-archives.jar}} was in a directory that had a {{+}} character in its name.
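
For reference, a tiny self-contained sketch of the decoding difference; the 
path is made up, and this shows only one possible alternative, not ClassUtil's 
eventual fix:
{code}
import java.net.URI;
import java.net.URLDecoder;

public class PlusDecodeDemo {
  public static void main(String[] args) throws Exception {
    String path = "/path/to/some+dir/hadoop-archives-2.5.2.jar";
    // URLDecoder implements application/x-www-form-urlencoded rules, so the
    // literal '+' becomes a space and the printed path no longer exists:
    System.out.println(URLDecoder.decode(path, "UTF-8"));
    // URI decoding handles %XX escapes but leaves a literal '+' intact:
    System.out.println(new URI(path).getPath());
  }
}
{code}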



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2018-03-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405368#comment-16405368
 ] 

genericqa commented on HADOOP-15062:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898785/HADOOP-15062.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 5fef9846d752 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a08921c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14326/testReport/ |
| Max. process+thread count | 1589 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14326/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at 

[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2018-03-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405239#comment-16405239
 ] 

Yufei Gu commented on HADOOP-15062:
---

[~miklos.szeg...@cloudera.com], Thanks for working on this. Is the unit test 
failure related?

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.4

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-14652:

Fix Version/s: 3.1.0

> Update metrics-core version to 3.2.4
> 
>
> Key: HADOOP-14652
> URL: https://issues.apache.org/jira/browse/HADOOP-14652
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, 
> HADOOP-14652.003.patch, HADOOP-14652.004.patch, HADOOP-14652.005.patch, 
> HADOOP-14652.006.patch
>
>
> The current artifact is:
> com.codahale.metrics:metrics-core:3.0.1
> That version could either be bumped to 3.0.2 (the latest of that line), or 
> we could use the latest artifact:
> io.dropwizard.metrics:metrics-core:3.2.4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15305:

Fix Version/s: 3.1.0

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch
>
>
> FileUtils.writeStringToFile(File, String) relies on the platform default 
> charset and should be replaced with FileUtils.writeStringToFile(File, 
> String, Charset).
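
The mechanical change looks like the following sketch (the file name is a 
throwaway example):
{code}
import java.io.File;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

public class WriteStringDemo {
  public static void main(String[] args) throws Exception {
    File f = new File("example.txt");
    // Deprecated: encodes with the platform default charset.
    FileUtils.writeStringToFile(f, "hello");
    // Replacement: the charset is explicit.
    FileUtils.writeStringToFile(f, "hello", StandardCharsets.UTF_8);
  }
}
{code}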



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15191) Add Private/Unstable BulkDelete operations to supporting object stores for DistCP

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15191:

Fix Version/s: 3.1.0

> Add Private/Unstable BulkDelete operations to supporting object stores for 
> DistCP
> -
>
> Key: HADOOP-15191
> URL: https://issues.apache.org/jira/browse/HADOOP-15191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15191-001.patch, HADOOP-15191-002.patch, 
> HADOOP-15191-003.patch, HADOOP-15191-004.patch
>
>
> Large scale DistCP with the -delete option doesn't finish in a viable time 
> because the final CopyCommitter deletes all missing files one by one. This 
> isn't randomized (the list is sorted), and it's throttled by AWS.
> If bulk deletion of files were exposed as an API, distCP would make 1/1000 
> of the REST calls, and so would not get throttled.
> Proposed: add an initially private/unstable interface for stores, 
> {{BulkDelete}}, which declares a page size and offers a 
> {{bulkDelete(List<Path>)}} operation for the bulk deletion.
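
A sketch of what such an interface could look like, inferred only from the 
description above; the method names and page-size accessor are illustrative, 
not the committed API:
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;

/**
 * Hypothetical private/unstable bulk-delete contract for object stores.
 */
public interface BulkDelete {
  /** Maximum number of paths a single bulkDelete() call may take. */
  int getBulkDeleteLimit();

  /** Delete the given paths in one store-side operation. */
  void bulkDelete(List<Path> paths) throws IOException;
}
{code}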



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15287:

Fix Version/s: 3.1.0

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML<T extends _> extends EImp<T> implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549
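
A minimal illustration of the language change itself, independent of Hamlet; 
the two-underscore rename mirrors the direction the Java 9-safe replacement 
took:
{code}
public class UnderscoreDemo {
  // Compiled on Java 8, but fails on Java 9+ with
  // "error: as of release 9, '_' is a keyword":
  //
  //   void _() { }
  //
  // A Java 9-safe spelling uses a two-underscore name instead:
  void __() { }
}
{code}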



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13374) Add the L verification script

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13374:

Fix Version/s: 3.1.0

> Add the L verification script
> ---
>
> Key: HADOOP-13374
> URL: https://issues.apache.org/jira/browse/HADOOP-13374
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-13374.01.patch, HADOOP-13374.02.patch, 
> HADOOP-13374.03.patch, HADOOP-13374.04.patch
>
>
> This is the script that's used for L change verification during 
> HADOOP-12893. We should commit this as [~ozawa] 
> [suggested|https://issues.apache.org/jira/browse/HADOOP-13298?focusedCommentId=15374498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15374498].
> I was 
> [initially|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15283040=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15283040]
>  verifying from an on-fly shell command, and [~andrew.wang] contributed the 
> script later in [a comment|
> https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15303281=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303281],
>  so most credit should go to him. :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15234) Throw meaningful message on null when initializing KMSWebApp

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15234:

Fix Version/s: 3.1.0

> Throw meaningful message on null when initializing KMSWebApp
> 
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and the logging around it can be improved.
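
A sketch of the kind of guard the description asks for; the method and the 
message text are hypothetical, not the committed patch:
{code}
import org.apache.hadoop.crypto.key.KeyProvider;

public class ProviderPrecondition {
  /** Hypothetical guard to run before wrapping the provider. */
  static void checkProvider(KeyProvider keyProvider) {
    if (keyProvider == null) {
      throw new IllegalStateException("No KeyProvider was initialized for "
          + "the KMS; check the key provider URI in the KMS configuration.");
    }
  }
}
{code}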



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15271:

Fix Version/s: 3.1.0

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK9 JavaDoc cannot treat non-ascii characters due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-6852) apparent bug in concatenated-bzip2 support (decoding)

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-6852:
---
Fix Version/s: 3.1.0

> apparent bug in concatenated-bzip2 support (decoding)
> -
>
> Key: HADOOP-6852
> URL: https://issues.apache.org/jira/browse/HADOOP-6852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.22.0
> Environment: Linux x86_64 running 32-bit Hadoop, JDK 1.6.0_15
>Reporter: Greg Roelofs
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-6852.01.patch, HADOOP-6852.02.patch, 
> HADOOP-6852.03.patch, HADOOP-6852.04.patch
>
>
> The following simplified code (manually picked out of testMoreBzip2() in 
> https://issues.apache.org/jira/secure/attachment/12448272/HADOOP-6835.v4.trunk-hadoop-mapreduce.patch)
>  triggers a "java.io.IOException: bad block header" in 
> org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock(CBZip2InputStream.java:527):
> {noformat}
> JobConf jobConf = new JobConf(defaultConf);
> CompressionCodec bzip2 = new BZip2Codec();
> ReflectionUtils.setConf(bzip2, jobConf);
> localFs.delete(workDir, true);
> // copy multiple-member test file to HDFS
> String fn2 = "testCompressThenConcat.txt" + bzip2.getDefaultExtension();
> Path fnLocal2 = new 
> Path(System.getProperty("test.concat.data","/tmp"),fn2);
> Path fnHDFS2  = new Path(workDir, fn2);
> localFs.copyFromLocalFile(fnLocal2, fnHDFS2);
> FileInputFormat.setInputPaths(jobConf, workDir);
> final FileInputStream in2 = new FileInputStream(fnLocal2.toString());
> CompressionInputStream cin2 = bzip2.createInputStream(in2);
> LineReader in = new LineReader(cin2);
> Text out = new Text();
> int numBytes, totalBytes=0, lineNum=0;
> while ((numBytes = in.readLine(out)) > 0) {
>   ++lineNum;
>   totalBytes += numBytes;
> }
> in.close();
> {noformat}
> The specified file is also included in the H-6835 patch linked above, and 
> some additional debug output is included in the commented-out test loop 
> above.  (Only in the linked, "v4" version of the patch, however--I'm about to 
> remove the debug stuff for checkin.)
> It's possible I've done something completely boneheaded here, but the file, 
> at least, checks out in a subsequent set of subtests and with stock bzip2 
> itself.  Only the code above is problematic; it reads through the first 
> concatenated chunk (17 lines of text) just fine but chokes on the header of 
> the second one.  Altogether, the test file contains 84 lines of text and 4 
> concatenated bzip2 files.
> (It's possible this is a mapreduce issue rather than common, but note that 
> the identical gzip test works fine.  Possibly it's related to the 
> stream-vs-decompressor dichotomy, though; intentionally not supported?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15261:

Fix Version/s: 3.1.0

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Assignee: PandaMonkey
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: 347.patch, hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io:2.5. 
> At the same time, hadoop directly depends on an older version, 
> commons-io:2.4. Looking further into the source code, these two versions of 
> commons-io have many different features. The dependency conflict brings a 
> high risk of "NoClassDefFoundError" or "NoSuchMethodError" issues at 
> runtime. Please notice this problem. Maybe upgrading commons-io from 2.4 to 
> 2.5 is a good choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15293) TestLogLevel fails on Java 9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15293:

Fix Version/s: 3.1.0

> TestLogLevel fails on Java 9
> 
>
> Key: HADOOP-15293
> URL: https://issues.apache.org/jira/browse/HADOOP-15293
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.log.TestLogLevel
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 
> s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel
> [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel)  
> Time elapsed: 1.179 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Unrecognized SSL message' but got unexpected exception: 
> javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>   at 
> java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15280) TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail in trunk

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15280:

Fix Version/s: 3.1.0

> TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail 
> in trunk
> -
>
> Key: HADOOP-15280
> URL: https://issues.apache.org/jira/browse/HADOOP-15280
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Ray Chiang
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15280.00.patch, HADOOP-15280.01.patch
>
>
> I'm seeing these messages on OS X and on Linux.
> {noformat}
> [ERROR] Failures:
> [ERROR] 
> TestKMS.testWebHDFSProxyUserKerb:2526->doWebHDFSProxyUserTest:2625->runServer:158->runServer:176
>  org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Error while authenticating with endpoint: 
> http://localhost:56112/kms/v1/keys?doAs=foo1
> [ERROR] 
> TestKMS.testWebHDFSProxyUserSimple:2531->doWebHDFSProxyUserTest:2625->runServer:158->runServer:176
>  org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Error while authenticating with endpoint: 
> http://localhost:56206/kms/v1/keys?doAs=foo1 
> {noformat}
> as well as a [recent PreCommit-HADOOP-Build 
> job|https://builds.apache.org/job/PreCommit-HADOOP-Build/14235/].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15252:

Fix Version/s: 3.1.0

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15252.001.patch, HADOOP-15252.002.patch, 
> HADOOP-15252.003.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few 
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove the FileContentsHolder module, as the FileContents object is 
> available for filters on TreeWalker in the TreeWalkerAuditEvent.
> IDEA uses Checkstyle 8.8.
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well, to the 
> brand new 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-12897:

Fix Version/s: 3.1.0

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch, HADOOP-12897.005.patch, 
> HADOOP-12897.006.patch, HADOOP-12897.007.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.
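
A sketch of what including the URL could look like; the class and method are 
illustrative, not the actual KerberosAuthenticator code:
{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class AuthConnectDemo {
  /** Rethrow connect failures with the target URL in the message. */
  static void connect(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      conn.connect();
    } catch (IOException e) {
      throw new IOException(
          "Failed to connect to " + url + ": " + e.getMessage(), e);
    }
  }
}
{code}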



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15208) DistCp to offer -xtrack option to save src/dest filesets as alternative to delete()

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15208:

Fix Version/s: 3.1.0

> DistCp to offer -xtrack  option to save src/dest filesets as 
> alternative to delete()
> --
>
> Key: HADOOP-15208
> URL: https://issues.apache.org/jira/browse/HADOOP-15208
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15208-001.patch, HADOOP-15208-002.patch, 
> HADOOP-15208-002.patch, HADOOP-15208-003.patch
>
>
> There are opportunities to improve distcp delete performance and scalability 
> with object stores, but you need to test with production datasets to 
> determine whether the optimizations work and don't run out of memory, etc.
> By adding an option to save the sequence files of the source and dest 
> listings, people (myself included) can experiment with different strategies 
> before committing to one that doesn't scale.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15274) Move hadoop-openstack to slf4j

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15274:

Fix Version/s: 3.1.0

> Move hadoop-openstack to slf4j
> --
>
> Key: HADOOP-15274
> URL: https://issues.apache.org/jira/browse/HADOOP-15274
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/swift
>Reporter: Steve Loughran
>Assignee: fang zhenyi
>Priority: Minor
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15274.001.patch, HADOOP-15274.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15294) TestUGILoginFromKeytab fails on Java9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15294:

Fix Version/s: 3.1.0

> TestUGILoginFromKeytab fails on Java9
> -
>
> Key: HADOOP-15294
> URL: https://issues.apache.org/jira/browse/HADOOP-15294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15294.1.patch
>
>
> This is the same cause as HADOOP-15291, but this time we may need to fix 
> {{UserGroupInformation}}.
> {noformat}
> [ERROR] 
> testReloginAfterFailedRelogin(org.apache.hadoop.security.TestUGILoginFromKeytab)
>   Time elapsed: 1.157 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException:
> Login failure for user: us...@example.com 
> javax.security.auth.login.LoginException: java.lang.NullPointerException: 
> invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
> ...
>   at 
> org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.logout(UserGroupInformation.java:1888)
>   at 
> org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1129)
>   at 
> org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1109)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1078)
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1060)
>   at 
> org.apache.hadoop.security.TestUGILoginFromKeytab.testReloginAfterFailedRelogin(TestUGILoginFromKeytab.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15291) TestMiniKdc fails on Java 9

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15291:

Fix Version/s: 3.1.0

> TestMiniKdc fails on Java 9
> ---
>
> Key: HADOOP-15291
> URL: https://issues.apache.org/jira/browse/HADOOP-15291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15291.1.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 
> s <<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time 
> elapsed: 1.301 s  <<< ERROR!
> javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
>   at 
> java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
>   at 
> jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
>   at 
> java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
>   at 
> java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
>   at 
> java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
>   at 
> org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15282) HADOOP-15235 broke TestHttpFSServerWebServer

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15282:

Fix Version/s: 3.1.0

> HADOOP-15235 broke TestHttpFSServerWebServer
> 
>
> Key: HADOOP-15282
> URL: https://issues.apache.org/jira/browse/HADOOP-15282
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15282.001.patch
>
>
> As [~xiaochen] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-15235?focusedCommentId=16375379=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16375379]
>  on HADOOP-15235, it broke {{TestHttpFSServerWebServer}}:
> {noformat}
> 2018-02-23 23:13:29,791 WARN  ServletHandler - /webhdfs/v1/
> java.lang.IllegalArgumentException: Empty key
>   at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> java.lang.AssertionError: 
> Expected :500
> Actual   :200
>  
> {noformat}
> This only affects trunk because {{TestHttpFSServerWebServer}} doesn't exist 
> in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15235) Authentication Tokens should use HMAC instead of MAC

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15235:

Fix Version/s: 3.1.0

> Authentication Tokens should use HMAC instead of MAC
> 
>
> Key: HADOOP-15235
> URL: https://issues.apache.org/jira/browse/HADOOP-15235
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.2.0
>
> Attachments: HADOOP-15235.001.patch, HADOOP-15235.002.patch
>
>
> We currently use {{MessageDigest}} to compute a "SHA" MAC for signing 
> Authentication Tokens.  Firstly, what "SHA" maps to is dependent on the JVM 
> and Cryptography Provider.  While they _should_ do something reasonable, it's 
> probably a safer idea to pick a specific algorithm.  It looks like the Oracle 
> JVM picks SHA-1; though something like SHA-256 would be better.
> In any case, it would also be better to use an HMAC algorithm instead.
> Changing from SHA-1 to SHA-256 or MAC to HMAC won't generate equivalent 
> signatures, so this would normally be an incompatible change because the 
> server wouldn't accept previous tokens it issued with the older algorithm.  
> However, Authentication Tokens are used as a cheaper shortcut for Kerberos, 
> so it's expected for users to also have Kerberos credentials; in this case, 
> the Authentication Token will be rejected, but it will silently retry using 
> Kerberos, and get an updated token.  So this should all be transparent to the 
> user.
> And finally, the code where we verify a signature uses a non-constant-time 
> comparison, which could be subject to timing attacks. I believe it would be 
> quite difficult to exploit in this case, but we're probably better off 
> using a constant-time comparison.
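
A sketch of the two improvements together (an explicit HMAC algorithm plus 
constant-time verification); the class and method names are illustrative, not 
the actual Signer code:
{code}
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSignerSketch {
  /** Sign with a pinned algorithm rather than a provider-dependent "SHA". */
  static byte[] sign(byte[] secret, byte[] payload) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    return mac.doFinal(payload);
  }

  /** Verify using a comparison whose runtime is independent of the inputs. */
  static boolean verify(byte[] secret, byte[] payload, byte[] signature)
      throws Exception {
    // MessageDigest.isEqual is constant-time, closing the timing side channel.
    return MessageDigest.isEqual(sign(secret, payload), signature);
  }
}
{code}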



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15288:

Fix Version/s: 3.1.0

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.






[jira] [Commented] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.

2018-03-19 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405132#comment-16405132
 ] 

Larry McCay commented on HADOOP-15325:
--

Makes sense - +1 for the enhancement idea!

> Add an option to make Configuration.getPassword() not to fallback to read 
> passwords from configuration.
> ---
>
> Key: HADOOP-15325
> URL: https://issues.apache.org/jira/browse/HADOOP-15325
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> HADOOP-10607 added a public API Configuration.getPassword() which reads 
> passwords from a credential provider and then falls back to reading from 
> configuration if one is not available.
> This API has been used throughout the Hadoop codebase and downstream 
> applications. It is understandable for old password configuration keys to 
> fall back to configuration to maintain backward compatibility. But for new 
> configuration passwords that have no legacy equivalent, there should be an 
> option to _not_ fall back, because storing passwords in configuration is 
> considered a bad security practice.






[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2018-03-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405100#comment-16405100
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

{quote}How about we provide an option for getPassword() so it optionally does 
not fall back to read password from configuration?
{quote}
Filed HADOOP-15325 for that.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use that to configure the system 
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words may be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
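
For illustration, a minimal sketch of an LDAP-over-SSL client that supplies a
truststore via the standard JSSE system properties (the truststore path and
password would come from the proposed, still-hypothetical configuration keys):

{code:java}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapsTrustStoreSketch {
  public static DirContext connect(String trustStore, String trustStorePass)
      throws Exception {
    // Standard JSSE properties; this is what the new config keys would feed.
    System.setProperty("javax.net.ssl.trustStore", trustStore);
    System.setProperty("javax.net.ssl.trustStorePassword", trustStorePass);

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, "ldaps://ldap.example.com:636");
    env.put(Context.SECURITY_PROTOCOL, "ssl");
    return new InitialDirContext(env);
  }
}
{code}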






[jira] [Created] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.

2018-03-19 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15325:


 Summary: Add an option to make Configuration.getPassword() not to 
fallback to read passwords from configuration.
 Key: HADOOP-15325
 URL: https://issues.apache.org/jira/browse/HADOOP-15325
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.6.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


HADOOP-10607 added a public API Configuration.getPassword() which reads 
passwords from a credential provider and then falls back to reading from 
configuration if one is not available.

This API has been used throughout the Hadoop codebase and downstream 
applications. It is understandable for old password configuration keys to fall 
back to configuration to maintain backward compatibility. But for new 
configuration passwords that have no legacy equivalent, there should be an 
option to _not_ fall back, because storing passwords in configuration is 
considered a bad security practice.






[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-03-19 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405073#comment-16405073
 ] 

Ajay Kumar commented on HADOOP-12760:
-

[~ajisakaa] Yeah, it seems the import statement used in the above approach will 
throw {{NoClassDefFoundError}}, and working around that would be clunky as 
well. For patch v06, CryptoStreamUtils L43: shall we add a logging statement? 
(Considering the probable scenario where we can't free native memory using the 
cleaner.)
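
For context, the usual reflection-based workaround avoids compile-time imports
of either Cleaner location entirely; a minimal sketch (not the v06 patch
itself) that tries the JDK 9+ {{Unsafe#invokeCleaner}} first and falls back to
the JDK 8 {{DirectByteBuffer#cleaner()}} path:

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class BufferCleanerSketch {
  public static void free(ByteBuffer buffer) {
    if (buffer == null || !buffer.isDirect()) {
      return;
    }
    try {
      // JDK 9+: sun.misc.Unsafe#invokeCleaner(ByteBuffer)
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
      theUnsafe.setAccessible(true);
      Method invokeCleaner =
          unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
      invokeCleaner.invoke(theUnsafe.get(null), buffer);
    } catch (ReflectiveOperationException jdk9Failed) {
      try {
        // JDK 8: DirectByteBuffer#cleaner() returns a sun.misc.Cleaner
        Method cleanerMethod = buffer.getClass().getMethod("cleaner");
        cleanerMethod.setAccessible(true);
        Object cleaner = cleanerMethod.invoke(buffer);
        Method clean = cleaner.getClass().getMethod("clean");
        clean.invoke(cleaner);
      } catch (ReflectiveOperationException jdk8Failed) {
        // Could not free eagerly; this is where a log statement would go,
        // and GC will eventually reclaim the native memory.
      }
    }
  }
}
{code}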

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner






[jira] [Commented] (HADOOP-15314) Scheme assertion in S3Guard DynamoDBMetadataStore::checkPath is unnecessarily restrictive

2018-03-19 Thread DJ Hoffman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404986#comment-16404986
 ] 

DJ Hoffman commented on HADOOP-15314:
-

We're not dealing with EMR at all. We're running into this problem due to 
having many platforms, all of which share the same set of URL constants, some 
of which go through Hadoop and therefore via s3a, and others of which access s3 
via other AWS APIs. We're exploring what we can do internally, but we 
definitely appreciate you taking the time to respond here. Based on your 
comments so far, we think we're not going to run into any large compatibility 
issues, which is a relief on our end. 

> Scheme assertion in S3Guard DynamoDBMetadataStore::checkPath is unnecessarily 
> restrictive
> -
>
> Key: HADOOP-15314
> URL: https://issues.apache.org/jira/browse/HADOOP-15314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: DJ Hoffman
>Priority: Major
>
> In version 3.0.0, the checkPath method for dealing with paths prevents us 
> from using the s3:// scheme when utilizing S3Guard. However, in our 
> core-site.xml we have included 
> {noformat}
>   <property>
>     <name>fs.s3.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
> {noformat}
> which should enforce that s3-prefixed paths go through s3a and are properly 
> compatible with S3Guard. We removed the assertion that paths use the s3a 
> scheme (some of our paths use the s3 scheme), and our testing thus far with 
> S3Guard enabled has been positive. We believe the assertion in checkPath is 
> unnecessary and could be expanded to include the s3 and s3n schemes if not 
> dropped altogether or altered in some other way. We're happy to develop and 
> test a patch if the community is amenable to the change.
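
For illustration, a sketch of the relaxed check being proposed (names are
illustrative, not the actual DynamoDBMetadataStore code):

{code:java}
import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SchemeCheckSketch {
  // Accept any scheme that can be bound to the S3A connector, not just "s3a".
  private static final Set<String> S3_SCHEMES =
      new HashSet<>(Arrays.asList("s3", "s3a", "s3n"));

  static void checkPath(URI uri) {
    String scheme = uri.getScheme();
    if (scheme == null || !S3_SCHEMES.contains(scheme)) {
      throw new IllegalArgumentException("Unsupported scheme: " + scheme);
    }
  }
}
{code}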






[jira] [Commented] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2018-03-19 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404983#comment-16404983
 ] 

Rushabh S Shah commented on HADOOP-12125:
-

[~jzhuge]: I don't have enough cycles to work on this jira.
Please go ahead and re-assign if you plan to work on this.

> Retrying UnknownHostException on a proxy does not actually retry hostname 
> resolution
> 
>
> Key: HADOOP-12125
> URL: https://issues.apache.org/jira/browse/HADOOP-12125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Jason Lowe
>Assignee: Rushabh S Shah
>Priority: Major
>
> When RetryInvocationHandler attempts to retry an UnknownHostException the 
> hostname fails to be resolved again.  The InetSocketAddress in the 
> ConnectionId has cached the fact that the hostname is unresolvable, and when 
> the proxy tries to setup a new Connection object with that ConnectionId it 
> checks if the (cached) resolution result is unresolved and immediately throws.
> The end result is we sleep and retry for no benefit.  The hostname resolution 
> is never attempted again.
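
For illustration, the fix boils down to constructing a fresh
{{InetSocketAddress}} instead of reusing the cached, permanently unresolved
one; a minimal sketch (not the actual RetryInvocationHandler code):

{code:java}
import java.net.InetSocketAddress;

public class AddressRefreshSketch {
  // An InetSocketAddress caches its (un)resolved state forever, so a retry
  // must build a new instance to trigger another hostname lookup.
  static InetSocketAddress reResolve(InetSocketAddress addr) {
    if (addr.isUnresolved()) {
      return new InetSocketAddress(addr.getHostName(), addr.getPort());
    }
    return addr;
  }
}
{code}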






[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2018-03-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404935#comment-16404935
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

[~shv] 

Thanks for your suggestion. In fact, if you look at the implementation of 
Configuration.getPassword(), 
{code:java}
/**
 * Get the value for a known password configuration element.
 * In order to enable the elimination of clear text passwords in config,
 * this method attempts to resolve the property name as an alias through
 * the CredentialProvider API and conditionally fallsback to config.
 * @param name property name
 * @return password
 */
public char[] getPassword(String name) throws IOException {
  char[] pass = null;

  pass = getPasswordFromCredentialProviders(name);

  if (pass == null) {
pass = getPasswordFromConfig(name);
  }

  return pass;
}
{code}
It first tries to get the password from a credential file, which is encrypted 
with a password. It reads the password from config only if that first step 
fails. So it's even more secure than a plain-text password file.

How about we provide an option for getPassword() so it optionally does not fall 
back to reading the password from configuration?

I want to propose this solution, because Cloudera Manager supports only reading 
passwords from credential files, which is in fact a superior approach from a 
security perspective. I am not sure how Ambari reads passwords though. 
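
A sketch of what such an option might look like (the overload and parameter
name are hypothetical, building on the two helper methods shown above, not a
committed Hadoop API):

{code:java}
// Hypothetical overload of Configuration.getPassword(); illustrative only.
public char[] getPassword(String name, boolean allowConfigFallback)
    throws IOException {
  char[] pass = getPasswordFromCredentialProviders(name);
  if (pass == null && allowConfigFallback) {
    pass = getPasswordFromConfig(name);
  }
  return pass;
}
{code}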

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use that to configure the system 
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words may be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html






[jira] [Updated] (HADOOP-15319) hadoop fs -rm command misbehaves on recent hadoop version 2.8.2

2018-03-19 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-15319:

Summary: hadoop fs -rm command misbehaves on recent hadoop version 2.8.2  
(was: hadoop fs -rm command misbehaves on recent hadoop version 2.5.0)

> hadoop fs -rm command misbehaves on recent hadoop version 2.8.2
> ---
>
> Key: HADOOP-15319
> URL: https://issues.apache.org/jira/browse/HADOOP-15319
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.8.2
>Reporter: Saurabh Padhy
>Priority: Major
>
> This issue is regarding the hadoop fs -rm command.
> In Hadoop 2.4.0, when we execute "hadoop fs -rm /a/b/c/*",
> it removes the files inside the c directory only.
> But in versions 2.8.2 and later,
> when we execute "hadoop fs -rm /a/b/c/**" or "hdfs dfs -rm /a/b/c/**",
> it removes the files inside as well as the directory itself.
> Please look into the issue.
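
For illustration, the shell's behaviour boils down to glob expansion followed
by a non-recursive delete per match; a minimal sketch (not the actual FsShell
implementation):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RmGlobSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Expand the glob the same way the shell does before deleting.
    FileStatus[] matches = fs.globStatus(new Path("/a/b/c/*"));
    if (matches != null) {
      for (FileStatus st : matches) {
        // recursive=false: a plain -rm should refuse non-empty directories.
        fs.delete(st.getPath(), false);
      }
    }
  }
}
{code}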






[jira] [Comment Edited] (HADOOP-15319) hadoop fs -rm command misbehaves on recent hadoop version 2.5.0

2018-03-19 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404781#comment-16404781
 ] 

Rushabh S Shah edited comment on HADOOP-15319 at 3/19/18 1:10 PM:
--

[~saurabhpadhy]: Can you create some sample files and directories and post the 
result of {{hadoop fs -ls}} before and after the rm command?
 Also post the {{out stream}} after you run the {{rm}} command.

I am not able to reproduce the error that you are seeing.
That way I will replicate the same directory structure on my side and will run 
the exact same commands that you will run.
Please don't forget to quote the paths in the {{ls}} and {{rm}} commands.
  


was (Author: shahrs87):
[~saurabhpadhy]: Can you create some sample files and directories and post the 
result of {{hadoop fs -ls}} before and after the rm command.
Also post the {{out stream}} after you run the {{rm}} command.

I am not able to reproduce the error that you are seeing.
 

> hadoop fs -rm command misbehaves on recent hadoop version 2.5.0
> ---
>
> Key: HADOOP-15319
> URL: https://issues.apache.org/jira/browse/HADOOP-15319
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.8.2
>Reporter: Saurabh Padhy
>Priority: Major
>
> This issue is regarding the hadoop fs -rm command.
> In Hadoop 2.4.0, when we execute "hadoop fs -rm /a/b/c/*",
> it removes the files inside the c directory only.
> But in versions 2.8.2 and later,
> when we execute "hadoop fs -rm /a/b/c/**" or "hdfs dfs -rm /a/b/c/**",
> it removes the files inside as well as the directory itself.
> Please look into the issue.






[jira] [Commented] (HADOOP-15319) hadoop fs -rm command misbehaves on recent hadoop version 2.5.0

2018-03-19 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404781#comment-16404781
 ] 

Rushabh S Shah commented on HADOOP-15319:
-

[~saurabhpadhy]: Can you create some sample files and directories and post the 
result of {{hadoop fs -ls}} before and after the rm command?
Also post the {{out stream}} after you run the {{rm}} command.

I am not able to reproduce the error that you are seeing.
 

> hadoop fs -rm command misbehaves on recent hadoop version 2.5.0
> ---
>
> Key: HADOOP-15319
> URL: https://issues.apache.org/jira/browse/HADOOP-15319
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.8.2
>Reporter: Saurabh Padhy
>Priority: Major
>
> This issue is regarding the hadoop fs -rm command.
> In Hadoop 2.4.0, when we execute "hadoop fs -rm /a/b/c/*",
> it removes the files inside the c directory only.
> But in versions 2.8.2 and later,
> when we execute "hadoop fs -rm /a/b/c/**" or "hdfs dfs -rm /a/b/c/**",
> it removes the files inside as well as the directory itself.
> Please look into the issue.






[jira] [Updated] (HADOOP-15324) Wrong className in LoggerFactory.getLogger method

2018-03-19 Thread liuzhaokun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuzhaokun updated HADOOP-15324:

Attachment: HADOOP-15324.1.patch

> Wrong className in LoggerFactory.getLogger method
> -
>
> Key: HADOOP-15324
> URL: https://issues.apache.org/jira/browse/HADOOP-15324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: liuzhaokun
>Priority: Major
> Attachments: HADOOP-15324.1.patch
>
>
> If we use the LoggerFactory.getLogger method in a class, the most accurate 
> approach is to use that class as the parameter. So in this class, I think the 
> parameter should be AbstractVerifier.class, or the log will be attributed to 
> SSLFactory.class.






[jira] [Created] (HADOOP-15324) Wrong className in LoggerFactory.getLogger method

2018-03-19 Thread liuzhaokun (JIRA)
liuzhaokun created HADOOP-15324:
---

 Summary: Wrong className in LoggerFactory.getLogger method
 Key: HADOOP-15324
 URL: https://issues.apache.org/jira/browse/HADOOP-15324
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.0.0
Reporter: liuzhaokun


If we use the LoggerFactory.getLogger method in a class, the most accurate 
approach is to use that class as the parameter. So in this class, I think the 
parameter should be AbstractVerifier.class, or the log will be attributed to 
SSLFactory.class.
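
For illustration, a minimal sketch of the convention being described (assuming
the class in question is an AbstractVerifier hostname-verifier class):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class AbstractVerifier {
  // Pass the enclosing class so log records carry the right logger name;
  // using SSLFactory.class here would misattribute this class's logging.
  private static final Logger LOG =
      LoggerFactory.getLogger(AbstractVerifier.class);
}
{code}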






[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore

2018-03-19 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15323:
-
Description: Aliyun OSS will support shallow copy, which means the server will 
only copy metadata when a copy-object operation occurs. So we will improve 
multipartCopy for AliyunOSSFileSystemStore at that time. 

> AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore
> -
>
> Key: HADOOP-15323
> URL: https://issues.apache.org/jira/browse/HADOOP-15323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Aliyun OSS will support shallow copy, which means the server will only copy 
> metadata when a copy-object operation occurs. So we will improve 
> multipartCopy for AliyunOSSFileSystemStore at that time. 






[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore

2018-03-19 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15323:
-
Summary: AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore  
(was: AliyunOSS: Improve multipartCopy from AliyunOSSFileSystemStore)

> AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore
> -
>
> Key: HADOOP-15323
> URL: https://issues.apache.org/jira/browse/HADOOP-15323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>







[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve multipartCopy from AliyunOSSFileSystemStore

2018-03-19 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15323:
-
Summary: AliyunOSS: Improve multipartCopy from AliyunOSSFileSystemStore  
(was: AliyunOSS: Remove multipartCopy from AliyunOSSFileSystemStore)

> AliyunOSS: Improve multipartCopy from AliyunOSSFileSystemStore
> --
>
> Key: HADOOP-15323
> URL: https://issues.apache.org/jira/browse/HADOOP-15323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>







[jira] [Created] (HADOOP-15323) AliyunOSS: Remove multipartCopy from AliyunOSSFileSystemStore

2018-03-19 Thread wujinhu (JIRA)
wujinhu created HADOOP-15323:


 Summary: AliyunOSS: Remove multipartCopy from 
AliyunOSSFileSystemStore
 Key: HADOOP-15323
 URL: https://issues.apache.org/jira/browse/HADOOP-15323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/oss
Reporter: wujinhu
Assignee: wujinhu









[jira] [Comment Edited] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404451#comment-16404451
 ] 

SammiChen edited comment on HADOOP-15262 at 3/19/18 7:46 AM:
-

+1. Committed to trunk, branch-3.0, branch-2 and branch-2.9. Thanks to 
[~wujinhu] for the contribution. 


was (Author: sammi):
+1.  Committed to trunk, branch-2 and branch-2.9. Thanks [~wujinhu] 's 
contribution. 

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.
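
For illustration, a minimal sketch of the parallel scheme ({{listKeysUnder}}
and {{copyObject}} are hypothetical placeholders for the store's list and
single-object copy operations, not the actual AliyunOSSFileSystemStore API):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {
  static void renameDirectory(String srcDir, String dstDir) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(16);
    try {
      List<Future<?>> copies = new ArrayList<>();
      for (String key : listKeysUnder(srcDir)) {
        String dstKey = dstDir + key.substring(srcDir.length());
        // Submit each per-file copy instead of copying serially.
        copies.add(pool.submit(() -> copyObject(key, dstKey)));
      }
      for (Future<?> copy : copies) {
        copy.get(); // surface the first copy failure, if any
      }
    } finally {
      pool.shutdown();
    }
  }

  // Hypothetical placeholders for the store operations.
  static List<String> listKeysUnder(String dir) {
    return new ArrayList<>();
  }

  static void copyObject(String srcKey, String dstKey) {
    // one server-side object copy
  }
}
{code}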






[jira] [Updated] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15262:
---
Fix Version/s: 3.0.2

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 3.2.0, 3.0.2
>
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.






[jira] [Commented] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404451#comment-16404451
 ] 

SammiChen commented on HADOOP-15262:


+1. Committed to trunk, branch-2 and branch-2.9. Thanks to [~wujinhu] for the 
contribution. 

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 3.2.0
>
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.






[jira] [Updated] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15262:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   2.9.1
   2.10.0
   Status: Resolved  (was: Patch Available)

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 3.2.0
>
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.






[jira] [Commented] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404440#comment-16404440
 ] 

Hudson commented on HADOOP-15262:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13854 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13854/])
HADOOP-15262. AliyunOSS: move files under a directory in parallel when 
(sammi.chen: rev d67a5e2dec5c60d96b0c216182891cdfd7832ac5)
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java


> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.






[jira] [Updated] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-03-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15262:
---
Summary: AliyunOSS: move files under a directory in parallel when rename a 
directory  (was: AliyunOSS: rename() to move files in a directory in parallel)

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow 
> if a directory contains many files. So we can improve this by renaming files 
> in parallel.


