[jira] [Commented] (HADOOP-14424) Add CRC32C performance test.

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011755#comment-16011755
 ] 

Hadoop QA commented on HADOOP-14424:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 21 new + 13 unchanged - 0 fixed = 34 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverControllerStress |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14424 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868216/HADOOP-14424.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19a3ed2cd857 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48f297 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12323/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12323/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12323/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12323/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12323/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011741#comment-16011741
 ] 

Rohith Sharma K S commented on HADOOP-14412:


I will commit the trunk patch later today if there are no more objections.
Jenkins has not been triggered for the branch-2 v2 patch.
The branch-2.8 patch looks good to me.

> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, 
> HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch, 
> HADOOP-14412-branch-2.8.002.patch
>
>
> After upgrading one of our large clusters to 2.8 we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode 
> which in turn was calling HostsFileReader#getHostDetails.  The latter is 
> creating complete copies of the include and exclude sets for every node 
> heartbeat, and these sets are not small due to the size of the cluster.  
> These copies are causing multiple resizes of the underlying HashSets being 
> filled and creating lots of garbage.
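A minimal sketch of the pattern described above and one common remedy (class,
field, and method names here are hypothetical, not the actual patch): instead
of copying the sets on every call, publish an immutable snapshot once per
refresh and hand the same reference to every caller.

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.concurrent.atomic.AtomicReference;

    public class HostSetsSketch {
      private final Set<String> includes = new HashSet<>();

      // Expensive pattern: allocates and resizes a new HashSet per heartbeat.
      public synchronized Set<String> getIncludesByCopy() {
        return new HashSet<>(includes);
      }

      // Cheaper pattern: one copy per refresh, zero per read.
      private final AtomicReference<Set<String>> includeView =
          new AtomicReference<>(Collections.<String>emptySet());

      public void refresh(Set<String> newIncludes) {
        includeView.set(Collections.unmodifiableSet(new HashSet<>(newIncludes)));
      }

      public Set<String> getIncludes() {
        return includeView.get(); // no per-call allocation
      }
    }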



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011736#comment-16011736
 ] 

Yongjun Zhang commented on HADOOP-14407:


Hi [~omkarksa], 

Sorry for the delayed review. Would you please update the patch to fix the 
issue reported here:
https://issues.apache.org/jira/browse/HADOOP-14407?focusedCommentId=16011312=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16011312

Thanks.



> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch
>
>
> Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8KB. In our performance tests we have seen up to a ~3x performance boost with 
> bigger buffer sizes. Hence, we are making the copy buffer size a configurable 
> setting via a new parameter.
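As a rough illustration of the change being discussed, here is a sketch of a
copy loop whose buffer size comes from configuration; the key name below is a
placeholder, since the actual parameter name is elided above.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;

    public class BufferedCopySketch {
      // Hypothetical key; the patch introduces its own name.
      static final String COPY_BUFFER_SIZE_KEY = "distcp.copy.buffer.size";
      static final int DEFAULT_COPY_BUFFER_SIZE = 8 * 1024; // the old fixed 8KB

      static long copy(InputStream in, OutputStream out, Configuration conf)
          throws IOException {
        int bufSize = conf.getInt(COPY_BUFFER_SIZE_KEY, DEFAULT_COPY_BUFFER_SIZE);
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
          out.write(buf, 0, n);
          total += n;
        }
        return total;
      }
    }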



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14424) Add CRC32C performance test.

2017-05-15 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HADOOP-14424:
--
  Labels: test  (was: )
Assignee: LiXin Ge
  Status: Patch Available  (was: Open)

> Add CRC32C performance test.
> 
>
> Key: HADOOP-14424
> URL: https://issues.apache.org/jira/browse/HADOOP-14424
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: test
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14424.patch
>
>
> The default checksum algorithm of Hadoop is CRC32C, so we'd better add a new 
> test to compare Crc32C chunked verification implementations.
> This test is based on Crc32PerformanceTest. What I have done in this test:
> 1. Added a CRC32C performance test.
> 2. CRC32C is not supported by java.util.zip in the Java JDK, so it is simply 
> removed from this test.
> 3. The user can choose either a direct or a non-direct buffer to run this 
> test manually.
> 4. Used verifyChunkedSumsByteArray so that the native implementation can also 
> be tested with non-direct buffers.
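For a sense of what such a test measures, a rough timing sketch in the spirit
of Crc32PerformanceTest (not the attached patch itself), using Hadoop's
PureJavaCrc32C on a byte-array (non-direct) code path:

    import java.util.zip.Checksum;

    import org.apache.hadoop.util.PureJavaCrc32C;

    public class Crc32cBenchSketch {
      public static void main(String[] args) {
        byte[] data = new byte[64 * 1024 * 1024];
        int chunk = 512; // a typical bytes-per-checksum chunk size
        Checksum crc = new PureJavaCrc32C();

        long start = System.nanoTime();
        for (int off = 0; off + chunk <= data.length; off += chunk) {
          crc.reset();
          crc.update(data, off, chunk);
        }
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("CRC32C: %.1f MB/s%n", data.length / 1e6 / secs);
      }
    }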



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14424) Add CRC32C performance test.

2017-05-15 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HADOOP-14424:
--
Attachment: HADOOP-14424.patch

> Add CRC32C performance test.
> 
>
> Key: HADOOP-14424
> URL: https://issues.apache.org/jira/browse/HADOOP-14424
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: LiXin Ge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14424.patch
>
>
> The default checksum algorithm of Hadoop is CRC32C, so we'd better add a new 
> test to compare Crc32C chunked verification implementations.
> This test is based on Crc32PerformanceTest. What I have done in this test:
> 1. Added a CRC32C performance test.
> 2. CRC32C is not supported by java.util.zip in the Java JDK, so it is simply 
> removed from this test.
> 3. The user can choose either a direct or a non-direct buffer to run this 
> test manually.
> 4. Used verifyChunkedSumsByteArray so that the native implementation can also 
> be tested with non-direct buffers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011679#comment-16011679
 ] 

Hadoop QA commented on HADOOP-14412:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
10s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} root: The patch generated 0 new + 16 unchanged - 2 
fixed = 16 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
38s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}260m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_121 Failed junit tests | 

[jira] [Updated] (HADOOP-14424) Add CRC32C performance test.

2017-05-15 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HADOOP-14424:
--
Description: 
The default checksum algorithm of Hadoop is CRC32C, so we'd better add a new 
test to compare Crc32C chunked verification implementations.
This test is based on Crc32PerformanceTest. What I have done in this test:
1. Added a CRC32C performance test.
2. CRC32C is not supported by java.util.zip in the Java JDK, so it is simply 
removed from this test.
3. The user can choose either a direct or a non-direct buffer to run this 
test manually.
4. Used verifyChunkedSumsByteArray so that the native implementation can also 
be tested with non-direct buffers.

  was:
The default checksum algorithm of Hadoop is CRC32C, so we'd better add a new 
test to compare Crc32C chunked verification implementations.
This test is based on Crc32PerformanceTest. What I have done in this test:
1. Added a CRC32C performance test.
2. CRC32C is not supported by java.util.zip in the Java JDK, so it is simply 
removed from this test.
3. The user can choose either a direct or a non-direct buffer to run this 
test manually.


> Add CRC32C performance test.
> 
>
> Key: HADOOP-14424
> URL: https://issues.apache.org/jira/browse/HADOOP-14424
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: LiXin Ge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
>
> The default checksum algorithm of Hadoop is CRC32C, so we'd better add a new 
> test to compare Crc32C chunked verification implementations.
> This test is based on Crc32PerformanceTest. What I have done in this test:
> 1. Added a CRC32C performance test.
> 2. CRC32C is not supported by java.util.zip in the Java JDK, so it is simply 
> removed from this test.
> 3. The user can choose either a direct or a non-direct buffer to run this 
> test manually.
> 4. Used verifyChunkedSumsByteArray so that the native implementation can also 
> be tested with non-direct buffers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14350) Relative path for Kerberos keytab is not working on IBM JDK

2017-05-15 Thread Wen Yuan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011660#comment-16011660
 ] 

Wen Yuan Chen commented on HADOOP-14350:


Actually, we are running a Spark job as a YARN application. Spark copies the 
keytab to its working directory and passes the relative path to the login 
method. We are not able to configure Spark to use an absolute path.

> Relative path for Kerberos keytab is not working on IBM JDK
> ---
>
> Key: HADOOP-14350
> URL: https://issues.apache.org/jira/browse/HADOOP-14350
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.7.3
> Environment: IBM JDK
>Reporter: Wen Yuan Chen
>
> Consider the sample code below:
> import java.io.IOException;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class TestKrb {
>   public static void main(String[] args) throws IOException {
>     String user = args[0], path = args[1];
>     UserGroupInformation ugi =
>         UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, path);
>     System.out.println("Login successful");
>   }
> }
> When I use the IBM JDK and pass a relative path for the Kerberos keytab, it 
> throws error messages. According to the debug log, it always tries to read 
> the keytab from the root path. See the debug logs below:
> 2017-04-19 02:29:13,982 DEBUG 
> [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
> sampleName=Ops, always=false, type=DEFAULT, value=[Rate of successful 
> kerberos logins and latency (milliseconds)], valueName=Time)
> 2017-04-19 02:29:13,990 DEBUG 
> [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
> sampleName=Ops, always=false, type=DEFAULT, value=[Rate of failed kerberos 
> logins and latency (milliseconds)], valueName=Time)
> 2017-04-19 02:29:13,991 DEBUG 
> [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
> sampleName=Ops, always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
> 2017-04-19 02:29:13,992 DEBUG 
> [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - UgiMetrics, User and 
> group related metrics
> [KRB_DBG_CFG] Config:main:   Java config file: 
> /opt/ibm/java/jre/lib/security/krb5.conf
> [KRB_DBG_CFG] Config:main:   Loaded from Java config
> 2017-04-19 02:29:14,175 DEBUG [org.apache.hadoop.security.Groups] -  Creating 
> new Groups object
> 2017-04-19 02:29:14,178 DEBUG [org.apache.hadoop.util.NativeCodeLoader] - 
> Trying to load the custom-built native-hadoop library...
> 2017-04-19 02:29:14,179 DEBUG [org.apache.hadoop.util.NativeCodeLoader] - 
> Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: 
> hadoop (Not found in java.library.path)
> 2017-04-19 02:29:14,179 DEBUG [org.apache.hadoop.util.NativeCodeLoader] - 
> java.library.path=/opt/ibm/java/jre/lib/amd64/compressedrefs:/opt/ibm/java/jre/lib/amd64:/usr/lib64:/usr/lib
> 2017-04-19 02:29:14,179 WARN [org.apache.hadoop.util.NativeCodeLoader] - 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-04-19 02:29:14,180 DEBUG 
> [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] - Falling 
> back to shell based
> 2017-04-19 02:29:14,180 DEBUG 
> [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] - Group 
> mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
> 2017-04-19 02:29:14,334 DEBUG [org.apache.hadoop.util.Shell] - setsid exited 
> with exit code 0
> 2017-04-19 02:29:14,334 DEBUG [org.apache.hadoop.security.Groups] - Group 
> mapping 
> impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; 
> cacheTimeout=30; warningDeltaMs=5000
> IBMJGSSProvider Build-Level: -20161128
> [JGSS_DBG_CRED]  main JAAS config: principal=job/analytics
> [JGSS_DBG_CRED]  main JAAS config: credsType=initiate and accept
> [JGSS_DBG_CRED]  main config: useDefaultCcache=false
> [JGSS_DBG_CRED]  main config: useCcache=null
> [JGSS_DBG_CRED]  main config: useDefaultKeytab=false
> [JGSS_DBG_CRED]  main config: useKeytab=//job.keytab
> [JGSS_DBG_CRED]  main JAAS config: forwardable=false (default)
> [JGSS_DBG_CRED]  main JAAS config: renewable=false (default)
> [JGSS_DBG_CRED]  main JAAS config: proxiable=false (default)
> [JGSS_DBG_CRED]  main JAAS config: tryFirstPass=false 

[jira] [Updated] (HADOOP-14417) Update default SSL cipher list for KMS

2017-05-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14417:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.

Thanks [~eddyxu], [~rkanter], and [~andrew.wang] for review and commit.

> Update default SSL cipher list for KMS
> --
>
> Key: HADOOP-14417
> URL: https://issues.apache.org/jira/browse/HADOOP-14417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14417.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6
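To illustrate the class of suites being excluded (plain JSSE here, not the KMS
configuration mechanism itself, which adjusts the server's configured cipher
list): a sketch that drops Diffie-Hellman key-exchange suites from a default
cipher list.

    import java.util.Arrays;

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLParameters;

    public class CipherFilterSketch {
      public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters params = ctx.getDefaultSSLParameters();
        // Drop suites whose key exchange is (ephemeral) Diffie-Hellman.
        String[] kept = Arrays.stream(params.getCipherSuites())
            .filter(s -> !s.contains("_DHE_") && !s.contains("_DH_"))
            .toArray(String[]::new);
        params.setCipherSuites(kept);
        System.out.println("Kept " + kept.length + " cipher suites");
      }
    }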



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14417) Update default SSL cipher list for KMS

2017-05-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14417:

Summary: Update default SSL cipher list for KMS  (was: Update cipher list 
for KMS)

> Update default SSL cipher list for KMS
> --
>
> Key: HADOOP-14417
> URL: https://issues.apache.org/jira/browse/HADOOP-14417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14417.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011491#comment-16011491
 ] 

Mingliang Liu commented on HADOOP-14416:


The checkstyle warnings seem related. Can you fix them if applicable?

Thanks,

> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14416.001.patch, HADOOP-14416.001.patch, 
> Non-SecureRun-Logs.txt, SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a Spark cluster with wasb-acls enabled.
> 2. Change the Spark history log directory configuration to 
> wasb:///hdp/spark2-events
> 3. Launching the Spark shell should fail.
> The above scenario works fine on clusters that don't have wasb-acl 
> authorization enabled.
> Note: wasb:/// resolves correctly in the fs shell.
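The triple-slash form is easy to probe directly: the URI parses with an empty
authority (no account/container), so the filesystem must fall back to the
configured default, which is where the authorization path resolution can
diverge. A quick check with java.net.URI:

    import java.net.URI;

    public class TripleSlashCheck {
      public static void main(String[] args) {
        URI u = URI.create("wasb:///hdp/spark2-events");
        System.out.println("scheme    = " + u.getScheme());    // wasb
        System.out.println("authority = " + u.getAuthority()); // null (empty)
        System.out.println("path      = " + u.getPath());      // /hdp/spark2-events
      }
    }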



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14417) Update cipher list for KMS

2017-05-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011459#comment-16011459
 ] 

Andrew Wang commented on HADOOP-14417:
--

Sure, seems fine. I'm +0.

> Update cipher list for KMS
> --
>
> Key: HADOOP-14417
> URL: https://issues.apache.org/jira/browse/HADOOP-14417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14417.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14425) Add more s3guard metrics

2017-05-15 Thread Ai Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011454#comment-16011454
 ] 

Ai Deng commented on HADOOP-14425:
--

Hi [~ste...@apache.org], I think the metrics about "mismatches between S3Guard 
and the underlying object store" you mentioned are a little difficult to add.
Below is my understanding; please correct me if I'm wrong:
in the S3Guard design, the MetadataStore is the source of truth. If a path is 
marked as "authoritative" and has its status in the MetadataStore, we return 
that status from the MetadataStore directly. In S3mper, by contrast, the source 
of truth is S3: the list-path action always checks with S3, so S3mper can find 
the mismatches.

We can discuss this and the other metrics further before starting to add them.

> Add more s3guard metrics
> 
>
> Key: HADOOP-14425
> URL: https://issues.apache.org/jira/browse/HADOOP-14425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ai Deng
>
> The metrics suggested to add:
> Status:
> S3GUARD_METADATASTORE_ENABLED
> S3GUARD_METADATASTORE_IS_AUTHORITATIVE
> Operations:
> S3GUARD_METADATASTORE_INITIALIZATION
> S3GUARD_METADATASTORE_DELETE_PATH
> S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
> S3GUARD_METADATASTORE_DELETE_SUBTREE_PATH
> S3GUARD_METADATASTORE_GET_PATH
> S3GUARD_METADATASTORE_GET_PATH_LATENCY
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
> S3GUARD_METADATASTORE_MOVE_PATH
> S3GUARD_METADATASTORE_PUT_PATH
> S3GUARD_METADATASTORE_PUT_PATH_LATENCY
> S3GUARD_METADATASTORE_CLOSE
> S3GUARD_METADATASTORE_DESTROY
> From S3Guard:
> S3GUARD_METADATASTORE_MERGE_DIRECTORY
> For the failures:
> S3GUARD_METADATASTORE_DELETE_FAILURE
> S3GUARD_METADATASTORE_GET_FAILURE
> S3GUARD_METADATASTORE_PUT_FAILURE
> Etc:
> S3GUARD_METADATASTORE_PUT_RETRY_TIMES
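For context on how such counters are typically wired up, a sketch using the
metrics2 annotations visible elsewhere in this digest; the class and field
names are illustrative, not the eventual S3Guard implementation.

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;
    import org.apache.hadoop.metrics2.lib.MutableCounterLong;

    @Metrics(about = "S3Guard metadata store metrics", context = "fs")
    public class S3GuardMetricsSketch {
      @Metric("Metadata store put-path operations")
      MutableCounterLong putPath;

      @Metric("Metadata store put failures")
      MutableCounterLong putFailure;

      // The counter fields are instantiated when the source is registered
      // with the metrics system, e.g.:
      //   DefaultMetricsSystem.instance().register(new S3GuardMetricsSketch());
      public void recordPut(boolean succeeded) {
        putPath.incr();
        if (!succeeded) {
          putFailure.incr();
        }
      }
    }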



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14425) Add more s3guard metrics

2017-05-15 Thread Ai Deng (JIRA)
Ai Deng created HADOOP-14425:


 Summary: Add more s3guard metrics
 Key: HADOOP-14425
 URL: https://issues.apache.org/jira/browse/HADOOP-14425
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ai Deng


The metrics suggested to add:

Status:
S3GUARD_METADATASTORE_ENABLED
S3GUARD_METADATASTORE_IS_AUTHORITATIVE
Operations:
S3GUARD_METADATASTORE_INITIALIZATION
S3GUARD_METADATASTORE_DELETE_PATH
S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
S3GUARD_METADATASTORE_DELETE_SUBTREE_PATH
S3GUARD_METADATASTORE_GET_PATH
S3GUARD_METADATASTORE_GET_PATH_LATENCY
S3GUARD_METADATASTORE_GET_CHILDREN_PATH
S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
S3GUARD_METADATASTORE_MOVE_PATH
S3GUARD_METADATASTORE_PUT_PATH
S3GUARD_METADATASTORE_PUT_PATH_LATENCY
S3GUARD_METADATASTORE_CLOSE
S3GUARD_METADATASTORE_DESTROY
From S3Guard:
S3GUARD_METADATASTORE_MERGE_DIRECTORY
For the failures:
S3GUARD_METADATASTORE_DELETE_FAILURE
S3GUARD_METADATASTORE_GET_FAILURE
S3GUARD_METADATASTORE_PUT_FAILURE
Etc:
S3GUARD_METADATASTORE_PUT_RETRY_TIMES



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11436) HarFileSystem does not preserve permission, users and groups

2017-05-15 Thread Sarah Victor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarah Victor updated HADOOP-11436:
--
Affects Version/s: 3.0.0-alpha2
 Target Version/s: 3.0.0-alpha3
   Status: Patch Available  (was: In Progress)

> HarFileSystem does not preserve permission, users and groups
> 
>
> Key: HADOOP-11436
> URL: https://issues.apache.org/jira/browse/HADOOP-11436
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: John George
>Assignee: Sarah Victor
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch
>
>
> HarFileSystem does not preserve permissions, users, or groups. The archive 
> itself has these stored, but HarFileSystem ignores them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14412:

Attachment: HADOOP-14412-branch-2.8.002.patch

And the corresponding patch for branch-2.8.

> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, 
> HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch, 
> HADOOP-14412-branch-2.8.002.patch
>
>
> After upgrading one of our large clusters to 2.8 we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode 
> which in turn was calling HostsFileReader#getHostDetails.  The latter is 
> creating complete copies of the include and exclude sets for every node 
> heartbeat, and these sets are not small due to the size of the cluster.  
> These copies are causing multiple resizes of the underlying HashSets being 
> filled and creating lots of garbage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14412:

Attachment: HADOOP-14412-branch-2.002.patch

The last checkstyle issue is asking to put a period on something that isn't a 
sentence, so I'm ignoring that one.  The unit test failures appear to be 
unrelated.

Uploading an equivalent patch for branch-2.

> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, 
> HADOOP-14412-branch-2.001.patch, HADOOP-14412-branch-2.002.patch
>
>
> After upgrading one of our large clusters to 2.8 we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode 
> which in turn was calling HostsFileReader#getHostDetails.  The latter is 
> creating complete copies of the include and exclude sets for every node 
> heartbeat, and these sets are not small due to the size of the cluster.  
> These copies are causing multiple resizes of the underlying HashSets being 
> filled and creating lots of garbage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2017-05-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011410#comment-16011410
 ] 

Chen Liang commented on HADOOP-14415:
-

The failed test and the findbugs warnings are unrelated.

> Use java.lang.AssertionError instead of junit.framework.AssertionFailedError
> 
>
> Key: HADOOP-14415
> URL: https://issues.apache.org/jira/browse/HADOOP-14415
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14415.001.patch
>
>
> When reviewing HADOOP-14180, I found some test code throws 
> junit.framework.AssertionFailedError. org.junit.Assert no longer throws 
> AssertionFailedError, so we should use AssertionError instead of 
> AssertionFailedError.
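The substitution is mechanical; a before/after sketch:

    public class AssertionStyleSketch {
      static void failOld(String msg) {
        // JUnit 3 type; ties test code to junit.framework.
        throw new junit.framework.AssertionFailedError(msg);
      }

      static void failNew(String msg) {
        // java.lang type; what org.junit.Assert itself throws.
        throw new AssertionError(msg);
      }
    }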



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011399#comment-16011399
 ] 

Hadoop QA commented on HADOOP-14412:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 1 new + 26 unchanged - 
2 fixed = 27 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14412 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868118/HADOOP-14412.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8df0076b79d3 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48f297 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12316/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12316/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011392#comment-16011392
 ] 

Hadoop QA commented on HADOOP-14416:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
5s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 5 
new + 39 unchanged - 3 fixed = 44 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868158/HADOOP-14416.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c8cf75a6df74 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48f297 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12321/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12321/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12321/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  

[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011364#comment-16011364
 ] 

Hadoop QA commented on HADOOP-13786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
 2s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m  0s{color} 
| {color:red} root generated 12 new + 777 unchanged - 1 fixed = 789 total (was 
778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 101 new + 100 
unchanged - 23 fixed = 201 total (was 123) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 21 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-registry in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011323#comment-16011323
 ] 

Mingliang Liu commented on HADOOP-14416:


+1

I re-triggered the Jenkins pre-commit run.

> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14416.001.patch, HADOOP-14416.001.patch, 
> Non-SecureRun-Logs.txt, SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a Spark cluster with wasb-acls enabled.
> 2. Change the Spark history log directory configuration to 
> wasb:///hdp/spark2-events
> 3. Launching the Spark shell should fail.
> The above scenario works fine on clusters that don't have wasb-acl 
> authorization enabled.
> Note: wasb:/// resolves correctly in the fs shell.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14416:
---
Attachment: HADOOP-14416.001.patch

> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14416.001.patch, HADOOP-14416.001.patch, 
> Non-SecureRun-Logs.txt, SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a Spark cluster with wasb-acls enabled.
> 2. Change the Spark history log directory configuration to 
> wasb:///hdp/spark2-events
> 3. Launching the Spark shell should fail.
> The above scenario works fine on clusters that don't have wasb-acl 
> authorization enabled.
> Note: wasb:/// resolves correctly in the fs shell.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011313#comment-16011313
 ] 

Hadoop QA commented on HADOOP-14416:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-14416 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867776/Non-SecureRun-Logs.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12320/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14416.001.patch, Non-SecureRun-Logs.txt, 
> SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a spark cluster with wasb-acls enabled.
> 2. Change the spark history log directory configuration to 
> wasb:///hdp/spark2-events
> 3. Launching the spark shell then fails.
> The above scenario works fine on clusters that don't have wasb-acl 
> authorization enabled.
> Note: wasb:/// resolves correctly in the fs shell.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011312#comment-16011312
 ] 

Hadoop QA commented on HADOOP-14407:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-14407 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14407 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867756/HADOOP-14407.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12319/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch
>
>
> Currently, RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8 KB. In our performance tests we saw up to a ~3x performance boost with 
> bigger buffer sizes. Hence, this makes the copy buffer size a 
> configurable setting via a new parameter.
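As an illustration only (the parameter name is missing from the description
above, so the property key below is a placeholder, not the one the patch
introduces), a copy loop with a configurable buffer might look like:
{code}
// Placeholder property name; 8 KB matches the old hard-coded default.
int bufferSize = conf.getInt("distcp.copy.buffer.size", 8 * 1024);
byte[] buf = new byte[bufferSize];
int bytesRead;
// 'in' is the source stream and 'out' the target stream of the copy.
while ((bytesRead = in.read(buf)) >= 0) {
  out.write(buf, 0, bytesRead);
}
{code}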



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011309#comment-16011309
 ] 

Hadoop QA commented on HADOOP-14415:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 19s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868123/HADOOP-14415.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 044492f98db6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48f297 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12315/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12315/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12315/testReport/ |
| modules | C: 

[jira] [Updated] (HADOOP-14416) Path not resolving correctly while authorizing with WASB-Ranger when it starts with 'wasb:///' (triple-slash)

2017-05-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14416:
---
Status: Patch Available  (was: Open)

> Path not resolving correctly while authorizing with WASB-Ranger when it 
> starts with 'wasb:///' (triple-slash)
> -
>
> Key: HADOOP-14416
> URL: https://issues.apache.org/jira/browse/HADOOP-14416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, security
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14416.001.patch, Non-SecureRun-Logs.txt, 
> SecureRunLogs.txt
>
>
> Bug found while launching spark-shell.
> Repro steps:
> 1. Create a spark cluster with wasb-acls enabled.
> 2. Change the spark history log directory configuration to 
> wasb:///hdp/spark2-events
> 3. Launching the spark shell then fails.
> The above scenario works fine on clusters that don't have wasb-acl 
> authorization enabled.
> Note: wasb:/// resolves correctly in the fs shell.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14417) Update cipher list for KMS

2017-05-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011239#comment-16011239
 ] 

John Zhuge commented on HADOOP-14417:
-

HADOOP-14083 was committed to branch-2 only, so it is not in any release yet. 
This JIRA will be committed to branch-2 only as well, so it is not an 
incompatible change.

> Update cipher list for KMS
> --
>
> Key: HADOOP-14417
> URL: https://issues.apache.org/jira/browse/HADOOP-14417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14417.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on the Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6
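As a quick illustration of the underlying issue (a sketch only, not CDH- or
KMS-specific configuration), plain JSSE can enumerate the default cipher suites
and filter out the Diffie-Hellman ones that rely on the weak temporary keys;
the filter strings are an assumption for illustration, not the exact list the
patch uses:
{code}
import java.util.Arrays;
import javax.net.ssl.SSLServerSocketFactory;

SSLServerSocketFactory factory =
    (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
// Drop ephemeral-DH suites ("_DHE_") and anonymous/static DH ("_DH_");
// the remaining list is the sort of value a connector's cipher setting
// could be updated with.
String[] withoutDh = Arrays.stream(factory.getDefaultCipherSuites())
    .filter(suite -> !suite.contains("_DHE_") && !suite.contains("_DH_"))
    .toArray(String[]::new);
{code}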



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11436) HarFileSystem does not preserve permissions, users and groups

2017-05-15 Thread Sarah Victor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarah Victor updated HADOOP-11436:
--
Assignee: Sarah Victor  (was: John George)
  Status: In Progress  (was: Patch Available)

> HarFileSystem does not preserve permissions, users and groups
> 
>
> Key: HADOOP-11436
> URL: https://issues.apache.org/jira/browse/HADOOP-11436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: John George
>Assignee: Sarah Victor
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch
>
>
> HarFileSystem does not preserve permissions, users or groups. The archive 
> itself has these stored, but HarFileSystem ignores them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11436) HarFileSystem does not preserve permissions, users and groups

2017-05-15 Thread Sarah Victor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011232#comment-16011232
 ] 

Sarah Victor commented on HADOOP-11436:
---

We've added a couple of tests to TestHadoopArchives.java rather than 
TestHarFileSystemBasics.

TestHadoopArchives also exercises HarFileSystem, and it performs the archiving 
step, which we'd like to cover as part of the test.

The two tests cover archive extraction as a privileged user (supergroup) and 
as a non-privileged user. Extracting the archive as a privileged user preserves 
file ownership and permissions.
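For reference, a rough sketch of how a test can run the extraction step as a
non-privileged user via the standard UserGroupInformation test helper; the user
and group names below are placeholders, and the actual assertions live in the
attached patch:
{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

// Placeholder user/group names; only the doAs pattern matters here.
UserGroupInformation nonPriv =
    UserGroupInformation.createUserForTesting("harUser", new String[] {"users"});
nonPriv.doAs((PrivilegedExceptionAction<Void>) () -> {
  // extract files from the .har here and assert on the resulting
  // FileStatus owner, group and permission values
  return null;
});
{code}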

> HarFileSystem does not preserve permissions, users and groups
> 
>
> Key: HADOOP-11436
> URL: https://issues.apache.org/jira/browse/HADOOP-11436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: John George
>Assignee: John George
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch
>
>
> HarFileSystem does not preserve permissions, users or groups. The archive 
> itself has these stored, but HarFileSystem ignores them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reopened HADOOP-14407:


> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch
>
>
> Currently, RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8 KB. In our performance tests we saw up to a ~3x performance boost with 
> bigger buffer sizes. Hence, this makes the copy buffer size a 
> configurable setting via a new parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-14407:
---
Status: Patch Available  (was: Reopened)

> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch
>
>
> Currently, RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8 KB. In our performance tests we saw up to a ~3x performance boost with 
> bigger buffer sizes. Hence, this makes the copy buffer size a 
> configurable setting via a new parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11436) HarFileSystem does not preserve permissions, users and groups

2017-05-15 Thread Sarah Victor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarah Victor updated HADOOP-11436:
--
Attachment: HADOOP-11436.2.patch

> HarFileSystem does not preserve permissions, users and groups
> 
>
> Key: HADOOP-11436
> URL: https://issues.apache.org/jira/browse/HADOOP-11436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: John George
>Assignee: John George
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch
>
>
> HarFileSystem does not preserve permissions, users or groups. The archive 
> itself has these stored, but HarFileSystem ignores them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-14407.

Resolution: Information Provided

> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch
>
>
> Currently, RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8 KB. In our performance tests we saw up to a ~3x performance boost with 
> bigger buffer sizes. Hence, this makes the copy buffer size a 
> configurable setting via a new parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011224#comment-16011224
 ] 

Hadoop QA commented on HADOOP-14422:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
57s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14422 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868115/HADOOP-14422-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cb7b4a0c8087 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48f297 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12314/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12314/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12314/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: 

[jira] [Commented] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011146#comment-16011146
 ] 

Hadoop QA commented on HADOOP-14412:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 24s{color} | {color:orange} root: The patch generated 9 new + 24 unchanged - 
2 fixed = 33 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
41s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor |
|   | 

[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Patch Available  (was: Open)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. not visible until close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011072#comment-16011072
 ] 

Steve Loughran commented on HADOOP-13786:
-

Patch 026

Key changes: 
# tried to address Aaron's comments; 
# implemented Magic Committer with MR test to validate.

h3. Review changes

Mostly commented on above. I've pulled WriteOperationsHelper out and added a 
new class {{AwsCall}} which takes a closure and executes it, translating 
exceptions:
{code}
<T> T execute(String action, String path, Operation<T> operation)
    throws IOException {
  try {
    return operation.execute();
  } catch (AmazonClientException e) {
    throw S3AUtils.translateException(action, path, e);
  }
}
{code}
This is where we can add retry logic, throttling, etc. I've not done that, just 
lined things up for it to go in across the module. Example of use:
{code}
calls.execute("upload part",
    request.getKey(),
    () -> owner.uploadPart(request));
{code}
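To make that concrete, a rough sketch of what retry wrapping inside {{AwsCall}}
might look like; the fixed attempt count and retrying every
AmazonClientException are assumptions for illustration, not part of this patch:
{code}
// Sketch only: retry the closure a few times, then translate the last
// failure. A real version would also back off and classify exceptions.
<T> T executeWithRetries(String action, String path, Operation<T> operation)
    throws IOException {
  AmazonClientException last = null;
  for (int attempt = 0; attempt < 3; attempt++) {  // attempt count is arbitrary
    try {
      return operation.execute();
    } catch (AmazonClientException e) {
      last = e;
    }
  }
  throw S3AUtils.translateException(action, path, last);
}
{code}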

h3. Magic Committer

Works, at least as far as the IT tests go. Done by pulling up all the staging 
code from the base class and switching to that logic for executing the operations.

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. not visible until close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2017-05-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14415:

Status: Patch Available  (was: Open)

> Use java.lang.AssertionError instead of junit.framework.AssertionFailedError
> 
>
> Key: HADOOP-14415
> URL: https://issues.apache.org/jira/browse/HADOOP-14415
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14415.001.patch
>
>
> When reviewing HADOOP-14180, I found that some test code throws 
> junit.framework.AssertionFailedError. org.junit.Assert no longer throws 
> AssertionFailedError, so we should use AssertionError instead of 
> AssertionFailedError.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2017-05-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14415:

Attachment: HADOOP-14415.001.patch

Posted the v001 patch to replace {{AssertionFailedError}} with 
{{AssertionError}}.
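The substitution is mechanical; a representative before/after (with a made-up
message and variable) would be:
{code}
// Before: JUnit 3 error type from junit.framework
throw new junit.framework.AssertionFailedError("unexpected value: " + value);

// After: the standard java.lang error type, which org.junit.Assert
// itself throws on failure
throw new AssertionError("unexpected value: " + value);
{code}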

> Use java.lang.AssertionError instead of junit.framework.AssertionFailedError
> 
>
> Key: HADOOP-14415
> URL: https://issues.apache.org/jira/browse/HADOOP-14415
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14415.001.patch
>
>
> When reviewing HADOOP-14180, I found that some test code throws 
> junit.framework.AssertionFailedError. org.junit.Assert no longer throws 
> AssertionFailedError, so we should use AssertionError instead of 
> AssertionFailedError.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Open  (was: Patch Available)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. not visible until close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-026.patch

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. not visible until close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14412:

Attachment: HADOOP-14412.002.patch

A new trunk patch version that should address the checkstyle issues.

> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-14412.001.patch, HADOOP-14412.002.patch, 
> HADOOP-14412-branch-2.001.patch
>
>
> After upgrading one of our large clusters to 2.8, we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode, 
> which in turn was calling HostsFileReader#getHostDetails. The latter creates 
> complete copies of the include and exclude sets for every node heartbeat, and 
> these sets are not small given the size of the cluster. These copies cause 
> multiple resizes of the underlying HashSets as they are filled and create 
> lots of garbage.
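For illustration, a minimal sketch of one common way to avoid the per-call
copying: publish the two sets as one immutable snapshot behind a volatile field
so heartbeat readers share it. This is a sketch under assumptions (the
HostDetails holder here is hypothetical), not a quote of the patch:
{code}
import java.util.Collections;
import java.util.Set;

// Hypothetical immutable holder; readers share one snapshot instead of
// copying both sets on every heartbeat.
final class HostDetails {
  final Set<String> includes;
  final Set<String> excludes;
  HostDetails(Set<String> inc, Set<String> exc) {
    this.includes = Collections.unmodifiableSet(inc);
    this.excludes = Collections.unmodifiableSet(exc);
  }
}

// Fields/methods below would live in the enclosing reader class.
private volatile HostDetails current;

HostDetails getHostDetails() {
  return current;                        // no copying on the hot path
}

void refresh(Set<String> inc, Set<String> exc) {
  current = new HostDetails(inc, exc);   // one atomic publication per reload
}
{code}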



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: HADOOP-14422-002.patch

Modified the code to fix the JUnit errors and code style issues. No extra test 
method was added because I only modified the FTPFileSystem initialize method. 
All the fixes are in HADOOP-14422-002.patch.
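For illustration, a sketch of the parsing issue being fixed, assuming the URI
userinfo is split on the first ':' only; note that '@' inside the raw URL would
generally still need to be percent-encoded as %40. This is a sketch, not a
quote of the patch:
{code}
// Sketch: take user/password from the URI userinfo, splitting on the
// FIRST ':' only so a password such as "1@123" (or one containing ':')
// survives intact.
String userInfo = uri.getUserInfo();        // e.g. "test:1@123"
String user = null;
String password = null;
if (userInfo != null) {
  int idx = userInfo.indexOf(':');
  if (idx < 0) {
    user = userInfo;                        // no password given
  } else {
    user = userInfo.substring(0, idx);
    password = userInfo.substring(idx + 1); // may contain ':' or '@'
  }
}
{code}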

> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422-002.patch, HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Comment: was deleted

(was: Modified the code to fix the JUnit errors and code style issues. No extra 
test method was added because I only modified the FTPFileSystem initialize 
method)

> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422-002.patch, HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14415) Use java.lang.AssertionError instead of junit.framework.AssertionFailedError

2017-05-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HADOOP-14415:
---

Assignee: Chen Liang

> Use java.lang.AssertionError instead of junit.framework.AssertionFailedError
> 
>
> Key: HADOOP-14415
> URL: https://issues.apache.org/jira/browse/HADOOP-14415
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
>
> When reviewing HADOOP-14180, I found that some test code throws 
> junit.framework.AssertionFailedError. org.junit.Assert no longer throws 
> AssertionFailedError, so we should use AssertionError instead of 
> AssertionFailedError.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: (was: HADOOP-14422-002.patch)

> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: HADOOP-14422-002.patch

> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422-002.patch, HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex ftp password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: (was: HADOOP-14422-V2.patch)

> FTPFileSystem and distcp do not work when encountering a complex ftp 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: HADOOP-14422-V2.patch

Modified the code to fix the JUnit errors and code style issues. No extra test 
method was added because I only changed the FTPFileSystem initialize method.
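
For illustration only (an editor's sketch, not part of the patch): RFC 3986 
does not allow a raw '@' in the userinfo component of a URI, so a password 
such as {{1@123}} has to be percent-encoded before being embedded; otherwise 
the parser cannot tell which '@' separates the credentials from the host.

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FtpUriEncodeExample {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // Percent-encode the password so the authority contains exactly one '@'.
    String password = "1@123";
    String encoded = URLEncoder.encode(password, "UTF-8"); // -> "1%40123"
    System.out.println("ftp://test:" + encoded + "@node2/home/test/");
  }
}
{code}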

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422.patch, HADOOP-14422-V2.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14417) Update cipher list for KMS

2017-05-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010741#comment-16010741
 ] 

John Zhuge commented on HADOOP-14417:
-

[~andrew.wang] or [~steve_l], could you please comment?

The default cipher list was added in HADOOP-14083. This patch removes 5 DHE 
ciphers from the default list. I have verified the removal does not break 
HADOOP-14083.
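
For reference, a small JSSE sketch (not part of the patch) that lists the DHE 
suites the local JVM supports; one way to sanity-check what the removal 
excludes:

{code}
import javax.net.ssl.SSLContext;

public class ListDheCiphers {
  public static void main(String[] args) throws Exception {
    SSLContext ctx = SSLContext.getDefault();
    for (String suite : ctx.getSupportedSSLParameters().getCipherSuites()) {
      if (suite.contains("_DHE_")) {
        System.out.println(suite); // e.g. TLS_DHE_RSA_WITH_AES_128_CBC_SHA
      }
    }
  }
}
{code}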

> Update cipher list for KMS
> --
>
> Key: HADOOP-14417
> URL: https://issues.apache.org/jira/browse/HADOOP-14417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14417.branch-2.001.patch
>
>
> In Oracle Linux 6.8 configurations, the curl command cannot connect to 
> certain CDH services that run on Apache Tomcat when the cluster has been 
> configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services 
> reject connection attempts because the default cipher configuration uses weak 
> temporary server keys (based on Diffie-Hellman key exchange protocol).
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking

2017-05-15 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010721#comment-16010721
 ] 

Sean Mackrory commented on HADOOP-13760:


Attached an updated patch that incorporates *most* of Fabbri's feedback and is 
passing all tests. The encryption test issue seems to have disappeared (I'm 
fairly certain no change I made fixed that), and the remaining failures were 
due to a bug when creating parent directories in the metadata store. The logic 
that traversed back up the tree and stopped when it encountered a directory 
that already existed was not tombstone-aware. So it would stop at that point, 
and other code that traversed the tree from the root would stop on the other 
side of the same node. Some tests would therefore leave the metadata store in 
a state that subsequent tests couldn't work with, because a subset of the tree 
was hidden from them and they couldn't break through into it. That is fixed.
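
A self-contained model of the corrected walk (illustrative only; {{Meta}} and 
the plain map stand in for {{PathMetadata}} and the MetadataStore):

{code}
import java.util.HashMap;
import java.util.Map;

public class TombstoneAwareWalk {
  static class Meta {
    final boolean deleted; // tombstone flag
    Meta(boolean deleted) { this.deleted = deleted; }
  }

  // Create missing parents of 'path'. A tombstoned ancestor must NOT
  // terminate the walk, otherwise the subtree below it stays hidden from
  // top-down traversals.
  static void putWithParents(Map<String, Meta> store, String path) {
    store.put(path, new Meta(false));
    for (String p = parentOf(path); p != null; p = parentOf(p)) {
      Meta m = store.get(p);
      if (m != null && !m.deleted) {
        break; // a live ancestor already exists; stop here
      }
      store.put(p, new Meta(false)); // create, or revive a tombstoned, parent
    }
  }

  static String parentOf(String path) {
    int i = path.lastIndexOf('/');
    return i <= 0 ? null : path.substring(0, i);
  }

  public static void main(String[] args) {
    Map<String, Meta> store = new HashMap<>();
    store.put("/a", new Meta(true));    // tombstone at /a
    putWithParents(store, "/a/b/c");
    System.out.println(store.keySet()); // /a, /a/b, /a/b/c -- all live now
  }
}
{code}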

>> Curious if we can combine three deleted item maps to a single map holding a 
>> struct instead...

Turns out we can. I was trying to follow what we currently do with puts as 
much as it made sense, and just add additional data structures for the 
differences. In the end, once I converted everything to use a single HashMap 
of structs, the rest of the code got a bit cleaner, so I think this is a good 
change.
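
A hedged sketch of the shape of that change (class and field names are 
illustrative, not the actual patch):

{code}
import java.util.HashMap;
import java.util.Map;

public class DeletedItems {
  // One record per path instead of three parallel maps.
  static class DeletedEntry {
    final long deleteTime;      // when the tombstone was created
    final boolean wasDirectory; // what kind of entry was deleted
    DeletedEntry(long deleteTime, boolean wasDirectory) {
      this.deleteTime = deleteTime;
      this.wasDirectory = wasDirectory;
    }
  }

  // A single map now answers every lookup the three maps used to split up.
  private final Map<String, DeletedEntry> deleted = new HashMap<>();

  void recordDelete(String path, boolean isDirectory) {
    deleted.put(path, new DeletedEntry(System.currentTimeMillis(), isDirectory));
  }

  boolean isDeleted(String path) {
    return deleted.containsKey(path);
  }
}
{code}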

>> Why create a temporary HashSet just to iterate over keys?

To avoid a ConcurrentModificationException: iterating over a copy of the key 
set lets the loop mutate the underlying map safely.
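
A minimal sketch of the pattern:

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

public class CopyKeysExample {
  public static void main(String[] args) {
    Map<String, Boolean> pending = new HashMap<>();
    pending.put("/a", true);
    pending.put("/b", false);
    // Iterate over a snapshot of the keys: removing from 'pending' while
    // iterating pending.keySet() directly would throw
    // ConcurrentModificationException.
    for (String key : new HashSet<>(pending.keySet())) {
      if (pending.get(key)) {
        pending.remove(key);
      }
    }
    System.out.println(pending); // {/b=false}
  }
}
{code}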

>> (1) can this add duplicates? (2) How do you know this delayed item (e.g. 
>> /a/b/c) belongs in this listing request (e.g. list /x/y/x)?

Yes, both of these problems could exist; I'm guessing they simply don't show 
up because of the limited, contrived scenario used in the tests that exercise 
this. Fixed (2) in the latest patch. I forgot about (1) but will address it 
next.


I have no idea why I added the authoritative stuff now that I review it. Agree 
with your comments, and reverting those changes.


I'm going to take a look at refactoring my s3GetFileStatus changes. Your point 
about it being S3-specific is well taken, but the decision that involves 
tombstones needs access to more of the internal state of s3GetFileStatus than 
is returned, so it'll need some rework.

Thanks for the review!

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14412:

Attachment: HADOOP-14412-branch-2.001.patch

Thanks for the review, Rohith!  Here's a patch for branch-2, which is needed 
to fix a few JDK8-isms in the trunk patch.  I'll follow up shortly with a 
branch-2.8 patch, which is needed to fix a unit test that broke with this 
change in that branch.

> HostsFileReader#getHostDetails is very expensive on large clusters
> --
>
> Key: HADOOP-14412
> URL: https://issues.apache.org/jira/browse/HADOOP-14412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-14412.001.patch, HADOOP-14412-branch-2.001.patch
>
>
> After upgrading one of our large clusters to 2.8 we noticed many IPC server 
> threads of the resourcemanager spending time in NodesListManager#isValidNode 
> which in turn was calling HostsFileReader#getHostDetails.  The latter is 
> creating complete copies of the include and exclude sets for every node 
> heartbeat, and these sets are not small due to the size of the cluster.  
> These copies are causing multiple resizes of the underlying HashSets being 
> filled and creating lots of garbage.
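
A sketch of the general remedy described above (names hypothetical, not the 
actual patch): publish one immutable snapshot that all readers share, so the 
per-heartbeat path allocates nothing.

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class HostDetailsSketch {
  // Immutable snapshots shared by all readers; replaced wholesale on refresh.
  private volatile Set<String> includes = Collections.emptySet();
  private volatile Set<String> excludes = Collections.emptySet();

  // Called only when the hosts files are (re)loaded.
  synchronized void refresh(Set<String> inc, Set<String> exc) {
    includes = Collections.unmodifiableSet(new HashSet<>(inc));
    excludes = Collections.unmodifiableSet(new HashSet<>(exc));
  }

  // Hot path: the per-heartbeat validity check copies nothing.
  boolean isValidNode(String host) {
    Set<String> inc = includes; // read each volatile field once
    Set<String> exc = excludes;
    return (inc.isEmpty() || inc.contains(host)) && !exc.contains(host);
  }
}
{code}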



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13760) S3Guard: add delete tracking

2017-05-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13760:
---
Attachment: HADOOP-13760-HADOOP-13345.007.patch

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7464) hadoop fs -stat '{glob}' gives null with combo of absolute and non-existent files

2017-05-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-7464:
-
Description: 
I'm trying to {{hadoop fs -stat}} a list of HDFS files all at once, because 
doing them one at a time is slow.  stat doesn't accept multiple arguments, so 
I'm using a glob of the form '\{file1,file2\}' (quoted from the shell).  I've 
discovered this doesn't work for me because the glob expands non-existent files 
to nothing, and I get nothing back from stat.  It would be nice to be able to 
use stat for this, but perhaps that's more of a feature request.

However, in the process, I discovered that with relative pathnames, I get back 
the stats for the existing files.  With absolute filenames, I get back {{stat: 
null}}.  


$ hadoop fs -touchz file1 file2
$ hadoop fs -stat '\{file1,file2\}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '\{file1,file2,nonexistent\}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '\{/user/me/file1,/user/me/file2\}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '\{/user/me/file1,/user/me/file2,nonexistent\}'
stat: null


Perhaps I'm doing something dumb, but it seems like stat should give the same 
results whether you use relative or absolute paths.

  was:
I'm trying to {{hadoop fs -stat}} a list of HDFS files all at once, because 
doing them one at a time is slow.  stat doesn't accept multiple arguments, so 
I'm using a glob of the form '{file1,file2}' (quoted from the shell).  I've 
discovered this doesn't work for me because the glob expands non-existent files 
to nothing, and I get nothing back from stat.  It would be nice to be able to 
use stat for this, but perhaps that's more of a feature request.

However, in the process, I discovered that with relative pathnames, I get back 
the stats for the existing files.  With absolute filenames, I get back {{stat: 
null}}.  


$ hadoop fs -touchz file1 file2
$ hadoop fs -stat '{file1,file2}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '{file1,file2,nonexistent}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '{/user/me/file1,/user/me/file2}'
2011-07-15 21:21:19
2011-07-15 21:21:19
$ hadoop fs -stat '{/user/me/file1,/user/me/file2,nonexistent}'
stat: null


Perhaps I'm doing something dumb, but it seems like stat should give the same 
results whether you use relative or absolute paths.


> hadoop fs -stat '{glob}' gives null with combo of absolute and non-existent 
> files
> -
>
> Key: HADOOP-7464
> URL: https://issues.apache.org/jira/browse/HADOOP-7464
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2
> Environment: CDH3u0
>Reporter: Jay Hacker
>Priority: Minor
>
> I'm trying to {{hadoop fs -stat}} a list of HDFS files all at once, because 
> doing them one at a time is slow.  stat doesn't accept multiple arguments, so 
> I'm using a glob of the form '\{file1,file2\}' (quoted from the shell).  I've 
> discovered this doesn't work for me because the glob expands non-existent 
> files to nothing, and I get nothing back from stat.  It would be nice to be 
> able to use stat for this, but perhaps that's more of a feature request.
> However, in the process, I discovered that with relative pathnames, I get 
> back the stats for the existing files.  With absolute filenames, I get back 
> {{stat: null}}.  
> $ hadoop fs -touchz file1 file2
> $ hadoop fs -stat '\{file1,file2\}'
> 2011-07-15 21:21:19
> 2011-07-15 21:21:19
> $ hadoop fs -stat '\{file1,file2,nonexistent\}'
> 2011-07-15 21:21:19
> 2011-07-15 21:21:19
> $ hadoop fs -stat '\{/user/me/file1,/user/me/file2\}'
> 2011-07-15 21:21:19
> 2011-07-15 21:21:19
> $ hadoop fs -stat '\{/user/me/file1,/user/me/file2,nonexistent\}'
> stat: null
> Perhaps I'm doing something dumb, but it seems like stat should give the same 
> results whether you use relative or absolute paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12005) Switch off checkstyle file length warnings

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010616#comment-16010616
 ] 

Hadoop QA commented on HADOOP-12005:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-12005 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12005 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12752792/HADOOP-12005.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12312/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Switch off checkstyle file length warnings
> --
>
> Key: HADOOP-12005
> URL: https://issues.apache.org/jira/browse/HADOOP-12005
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-12005.001.patch
>
>
> We have many large files over 2000 lines. checkstyle warns every time there 
> is a change to one of these files.
> Let's switch off this check or increase the limit to reduce the number of 
> non-actionable -1s from Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed

2017-05-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010599#comment-16010599
 ] 

John Zhuge commented on HADOOP-14421:
-

Timestamp for triage:
{noformat}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 6:42.886s
[INFO] Finished at: Sun May 14 01:17:42 UTC 2017
{noformat}

> TestAdlFileSystemContractLive#testListStatus assertion failed
> -
>
> Key: HADOOP-14421
> URL: https://issues.apache.org/jira/browse/HADOOP-14421
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Atul Sikaria
>
> TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273
>  expected:<1> but was:<11>
> {noformat}
> Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)  
> Time elapsed: 0.518 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<1> but was:<11>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60)
> {noformat}
> This is the first time we saw the issue. The test store {{rwj2dm}} was 
> created on the fly and destroyed after the test.
> The code base does not have HADOOP-14230, which cleans up the test dir 
> better; trying to determine whether that might help.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12005) Switch off checkstyle file length warnings

2017-05-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-12005:
-

Assignee: Andras Bokor

> Switch off checkstyle file length warnings
> --
>
> Key: HADOOP-12005
> URL: https://issues.apache.org/jira/browse/HADOOP-12005
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-12005.001.patch
>
>
> We have many large files over 2000 lines. checkstyle warns every time there 
> is a change to one of these files.
> Let's switch off this check or increase the limit to reduce the number of 
> non-actionable -1s from Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-11869) checkstyle rules/script need re-visiting

2017-05-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-11869:
-

Assignee: Andras Bokor

> checkstyle rules/script need re-visiting
> 
>
> Key: HADOOP-11869
> URL: https://issues.apache.org/jira/browse/HADOOP-11869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Andras Bokor
> Attachments: HADOOP-11869.ParameterNumber.patch
>
>
> There seem to be a lot of arcane errors being caused by the checkstyle 
> rules/script. Real issues tend to be buried in this noise. Some examples :
> 1. "Line is longer than 80 characters" - this shows up even for cases like 
> import statements, package names
> 2. "Missing a Javadoc comment." - for every private member including cases 
> like "Configuration conf". 
> Having rules like these will result in a large number of pre-commit job 
> failures. We should fine tune the rules used for checkstyle. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11869) checkstyle rules/script need re-visiting

2017-05-15 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010584#comment-16010584
 ] 

Andras Bokor commented on HADOOP-11869:
---

bq. Line is longer than 80 characters
Fixed by HADOOP-13603.

bq. "Missing a Javadoc comment." - for every private member including cases 
like "Configuration conf". 
Currently only {{JavadocType}} is turned on. So checkstyle checks Javadoc 
comments for class and interface definitions only.

> checkstyle rules/script need re-visiting
> 
>
> Key: HADOOP-11869
> URL: https://issues.apache.org/jira/browse/HADOOP-11869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
> Attachments: HADOOP-11869.ParameterNumber.patch
>
>
> There seem to be a lot of arcane errors being caused by the checkstyle 
> rules/script. Real issues tend to be buried in this noise. Some examples :
> 1. "Line is longer than 80 characters" - this shows up even for cases like 
> import statements, package names
> 2. "Missing a Javadoc comment." - for every private member including cases 
> like "Configuration conf". 
> Having rules like these will result in a large number of pre-commit job 
> failures. We should fine tune the rules used for checkstyle. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010551#comment-16010551
 ] 

Hadoop QA commented on HADOOP-14422:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
35s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 6 unchanged - 0 fixed = 14 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 36s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Dead store to builder in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getPort(String, int, int)  At 
FTPFileSystem.java:org.apache.hadoop.fs.ftp.FTPFileSystem.getPort(String, int, 
int)  At FTPFileSystem.java:[line 172] |
| Failed junit tests | hadoop.fs.TestDelegateToFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14422 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868064/HADOOP-14422.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a3f940da1b36 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6600abb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12311/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 

[jira] [Assigned] (HADOOP-6377) ChecksumFileSystem.getContentSummary throws NPE when directory contains inaccessible directories

2017-05-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-6377:


Assignee: Andras Bokor

> ChecksumFileSystem.getContentSummary throws NPE when directory contains 
> inaccessible directories
> 
>
> Key: HADOOP-6377
> URL: https://issues.apache.org/jira/browse/HADOOP-6377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
>
> When getContentSummary is called on a path that contains an unreadable 
> directory, it throws NPE, since RawLocalFileSystem.listStatus(Path) returns 
> null when File.list() returns null.
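
An illustrative guard, not the committed fix (helper name hypothetical): on 
the affected versions, callers can translate the null listing into an explicit 
error instead of dereferencing it.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SafeListing {
  // File.list() returns null for unreadable directories, which the local
  // filesystem propagated as a null FileStatus[] on the affected versions.
  static FileStatus[] listOrThrow(FileSystem fs, Path dir) throws IOException {
    FileStatus[] children = fs.listStatus(dir);
    if (children == null) {
      throw new IOException("Cannot read directory contents: " + dir);
    }
    return children;
  }
}
{code}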



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-9069) FileSystem.get leads to stack overflow if default FS is not configured with a scheme

2017-05-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-9069:


Assignee: Andras Bokor

> FileSystem.get leads to stack overflow if default FS is not configured with a 
> scheme
> 
>
> Key: HADOOP-9069
> URL: https://issues.apache.org/jira/browse/HADOOP-9069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.3, 2.0.1-alpha
>Reporter: Jason Lowe
>Assignee: Andras Bokor
>Priority: Minor
>
> If fs.defaultFS is configured without a scheme, e.g.: "/", then 
> FileSystem.get will infinitely recurse and lead to a stack overflow.  An 
> example stacktrace from 0.23:
> {noformat}
> java.lang.StackOverflowError
> at java.util.AbstractCollection.(AbstractCollection.java:66)
> at java.util.AbstractList.(AbstractList.java:76)
> at java.util.ArrayList.(ArrayList.java:128)
> at java.util.ArrayList.(ArrayList.java:139)
> at 
> org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:430)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:852)
> at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:171)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
> ...
> {noformat}
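
A minimal reproduction sketch, assuming a Hadoop client on the classpath:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SchemelessDefaultFs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "/"); // no scheme, e.g. "/" instead of "file:///"
    // getDefaultUri() yields a scheme-less URI, so FileSystem.get(uri, conf)
    // falls straight back to FileSystem.get(conf) and recurses until the
    // stack overflows on the affected versions.
    FileSystem.get(conf);
  }
}
{code}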



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Attachment: HADOOP-14422.patch

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Status: Patch Available  (was: Open)

Resolve this issue.

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14422.patch
>
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8741) Broken links from "Cluster setup" to *-default.html

2017-05-15 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010416#comment-16010416
 ] 

Andras Bokor commented on HADOOP-8741:
--

On versions 2.x and above this is no longer an issue.
The affected documentation versions are 1.0.4 and 1.2.1, both of which are EOL.

> Broken links from "Cluster setup" to *-default.html
> ---
>
> Key: HADOOP-8741
> URL: https://issues.apache.org/jira/browse/HADOOP-8741
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3
>Reporter: Bertrand Dechoux
>Priority: Minor
>  Labels: documentation
>
> Hi,
> The links from the cluster setup pages to the configuration files are broken.
> http://hadoop.apache.org/common/docs/stable/cluster_setup.html
> Read-only default configuration
> http://hadoop.apache.org/common/docs/current/core-default.html
> should be
> http://hadoop.apache.org/common/docs/r1.0.3/core-default.html
> The same holds for all three configurations: core, hdfs, and mapred.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14424) Add CRC32C performance test.

2017-05-15 Thread LiXin Ge (JIRA)
LiXin Ge created HADOOP-14424:
-

 Summary: Add CRC32C performance test.
 Key: HADOOP-14424
 URL: https://issues.apache.org/jira/browse/HADOOP-14424
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.0.0-alpha2
Reporter: LiXin Ge
Priority: Minor
 Fix For: 3.0.0-alpha2


The default checksum algorithm of Hadoop is CRC32C, so we should add a new 
test that compares chunked CRC32C verification implementations.
This test is based on Crc32PerformanceTest. What I have done in this test:
1. Added a CRC32C performance test.
2. java.util.zip in the Java JDK provides no CRC32C implementation, so that 
variant is simply removed from this test.
3. Users can choose either a direct or a non-direct buffer when running this 
test manually.
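
For context, a minimal usage sketch of the pure-Java CRC32C implementation 
such a test would exercise (assuming hadoop-common on the classpath):

{code}
import org.apache.hadoop.util.PureJavaCrc32C;

public class Crc32cSmokeTest {
  public static void main(String[] args) {
    byte[] data = "hello, crc32c".getBytes();
    // PureJavaCrc32C implements java.util.zip.Checksum.
    PureJavaCrc32C crc = new PureJavaCrc32C();
    crc.update(data, 0, data.length);
    System.out.printf("crc32c = 0x%08x%n", crc.getValue());
  }
}
{code}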



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Status: Open  (was: Patch Available)

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Status: Patch Available  (was: Open)

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14423) s3guard will set file length to -1 on a putObjectDirect(stream, -1) call

2017-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14423:

Priority: Minor  (was: Major)

Rating as minor, since the output streams don't normally pass in -1 as a length.

> s3guard will set file length to -1 on a putObjectDirect(stream, -1) call
> 
>
> Key: HADOOP-14423
> URL: https://issues.apache.org/jira/browse/HADOOP-14423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Priority: Minor
>
> You can pass a negative number into {{S3AFileSystem.putObjectDirect}}, which 
> means "put until the end of the stream". S3Guard has been using this {{len}} 
> argument; it needs to use the actual number of bytes uploaded. This is also 
> relevant with client-side encryption, where the amount of data put can 
> exceed the amount of data in the file or stream.
> I noticed this in the committer branch after adding some more assertions and 
> have changed it there: S3AFS.putObjectDirect now pulls the content length it 
> passes to finishedWrite() from the {{PutObjectResult}} instead. This can be 
> picked into the s3guard branch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14423) s3guard will set file length to -1 on a putObjectDirect(stream, -1) call

2017-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010355#comment-16010355
 ] 

Steve Loughran commented on HADOOP-14423:
-

Stack trace, which won't quite match s3guard or be reproducible there:
{code}
Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.154 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
testEncryption(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream)
  Time elapsed: 0.961 sec  <<< ERROR!
java.io.IOException: regular upload failed: java.lang.IllegalArgumentException: 
content length is negative
at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:205)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:456)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:368)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:159)
at 
org.apache.hadoop.fs.s3a.AbstractS3ATestBase.writeThenReadFile(AbstractS3ATestBase.java:135)
at 
org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.validateEncryptionForFilesize(AbstractTestS3AEncryption.java:79)
at 
org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.testEncryption(AbstractTestS3AEncryption.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.IllegalArgumentException: content length is negative
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2252)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1354)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$3(WriteOperationHelper.java:392)
at org.apache.hadoop.fs.s3a.AwsCall.execute(AwsCall.java:43)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:390)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:439)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:432)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> s3guard will set file length to -1 on a putObjectDirect(stream, -1) call
> 
>
> Key: HADOOP-14423
> URL: https://issues.apache.org/jira/browse/HADOOP-14423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>
> You can pass a negative number into {{S3AFileSystem.putObjectDirect}}, which 
> means "put until the end of the stream". 

[jira] [Created] (HADOOP-14423) s3guard will set file length to -1 on a putObjectDirect(stream, -1) call

2017-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14423:
---

 Summary: s3guard will set file length to -1 on a 
putObjectDirect(stream, -1) call
 Key: HADOOP-14423
 URL: https://issues.apache.org/jira/browse/HADOOP-14423
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0-alpha3
Reporter: Steve Loughran


You can pass a negative number into {{S3AFileSystem.putObjectDirect}}, which 
means "put until the end of the stream". S3Guard has been using this {{len}} 
argument; it needs to use the actual number of bytes uploaded. This is also 
relevant with client-side encryption, where the amount of data put can exceed 
the amount of data in the file or stream.

I noticed this in the committer branch after adding some more assertions and 
have changed it there: S3AFS.putObjectDirect now pulls the content length it 
passes to finishedWrite() from the {{PutObjectResult}} instead. This can be 
picked into the s3guard branch.
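
A hedged sketch of that change (it assumes the AWS SDK populates 
{{PutObjectResult.getMetadata()}} with the uploaded length; the helper is 
hypothetical):

{code}
import com.amazonaws.services.s3.model.PutObjectResult;

public class PutLengthSketch {
  // Resolve the length to record after an upload: prefer the caller's len
  // when it is valid, otherwise fall back to what the SDK reports, rather
  // than recording the "-1 = read to end of stream" sentinel.
  static long resolveLength(long requestedLen, PutObjectResult result) {
    if (requestedLen >= 0) {
      return requestedLen;
    }
    return result.getMetadata().getContentLength(); // assumed populated
  }
}
{code}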



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010164#comment-16010164
 ] 

Brahma Reddy Battula commented on HADOOP-14422:
---

[~Hongyuan Li] Added you as a contributor and assigned this to you. From now 
on you can assign issues to yourself.

Refer to the following guide on how to contribute:
https://wiki.apache.org/hadoop/HowToContribute

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> ftp username: test passwd:1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 
> 'null'
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at 
> org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password containing ':' or '@'

2017-05-15 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-14422:
-

Assignee: Hongyuan Li

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password containing ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> FTP username: test, password: 1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)






[jira] [Updated] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14422:
-
Description: 
FTP username: test, password: 1@123
hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test

17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
    at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
    at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)


  was:
hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test

17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
    at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
    at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)



> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>
> FTP username: test, password: 1@123
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   

[jira] [Comment Edited] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010125#comment-16010125
 ] 

Hongyuan Li edited comment on HADOOP-14422 at 5/15/17 7:59 AM:
---

[~owen.omalley] Could you please assign this issue to me? I will work on it.


was (Author: hongyuan li):
[~Owen O'Malley] Could you please assign this issue to me? I will work on it.

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)






[jira] [Commented] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010125#comment-16010125
 ] 

Hongyuan Li commented on HADOOP-14422:
--

[~Owen O'Malley] Could you please assign this issue to me? I will work on it.

> FTPFileSystem and distcp do not work when encountering a complex FTP 
> password with char ':' or '@'
> ---
>
> Key: HADOOP-14422
> URL: https://issues.apache.org/jira/browse/HADOOP-14422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>
> hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test
> 17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
>   at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
>   at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)






[jira] [Created] (HADOOP-14422) FTPFileSystem and distcp do not work when encountering a complex FTP password with char ':' or '@'

2017-05-15 Thread Hongyuan Li (JIRA)
Hongyuan Li created HADOOP-14422:


 Summary: FTPFileSystem and distcp do not work when encountering a 
complex FTP password with char ':' or '@'
 Key: HADOOP-14422
 URL: https://issues.apache.org/jira/browse/HADOOP-14422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, tools/distcp
Affects Versions: 3.0.0-alpha2
Reporter: Hongyuan Li


hadoop distcp ftp://test:1@123@node2/home/test/  hdfs://piaopiao/test

17/05/15 15:24:56 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://test:1@123@node2/home/test], targetPath=/test, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
17/05/15 15:24:57 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
17/05/15 15:24:59 ERROR tools.DistCp: Exception encountered
java.io.IOException: Login failed on server - 0.0.0.0, port - 21 as user 'null'
    at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:154)
    at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:68)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:263)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1630)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:389)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:187)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:453)







[jira] [Created] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed

2017-05-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14421:
---

 Summary: TestAdlFileSystemContractLive#testListStatus assertion 
failed
 Key: HADOOP-14421
 URL: https://issues.apache.org/jira/browse/HADOOP-14421
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: Atul Sikaria


TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273 expected:<1> but was:<11>
{noformat}
Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)  Time elapsed: 0.518 sec  <<< FAILURE!
junit.framework.AssertionFailedError: expected:<1> but was:<11>
    at junit.framework.Assert.fail(Assert.java:57)
    at junit.framework.Assert.failNotEquals(Assert.java:329)
    at junit.framework.Assert.assertEquals(Assert.java:78)
    at junit.framework.Assert.assertEquals(Assert.java:234)
    at junit.framework.Assert.assertEquals(Assert.java:241)
    at junit.framework.TestCase.assertEquals(TestCase.java:409)
    at org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at junit.framework.TestCase.runTest(TestCase.java:176)
    at org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60)
{noformat}

This is the first time we have seen this issue. The test store {{rwj2dm}} was 
created on the fly and destroyed after the test.

The code base does not include HADOOP-14230, which cleans up the test 
directory more thoroughly; trying to determine whether that would help.
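
As a hedged illustration of the failure's shape (the real assertion lives in FileSystemContractBaseTest#testListStatus, line 273, not in this sketch): listStatus on the test root should see only the entries created by the current run, so anything left behind by an earlier run inflates the count from the expected 1 toward the observed 11. The local-filesystem path below is hypothetical.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStatusCountSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        Path root = new Path("/tmp/contract-test-root"); // hypothetical test root

        // The kind of cleanup HADOOP-14230 strengthens: wipe the test root
        // first, so entries surviving a previous run are not counted again.
        fs.delete(root, true);

        fs.mkdirs(new Path(root, "dir1")); // the single entry this run creates
        FileStatus[] entries = fs.listStatus(root);
        System.out.println("entries = " + entries.length); // 1 after cleanup;
                                                           // stale state gives more
    }
}
{code}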



