[jira] [Comment Edited] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354447#comment-15354447
 ] 

Weiwei Yang edited comment on HDFS-10588 at 6/29/16 5:44 AM:
-

Here is a patch to eliminate this error message. Let me know if this looks 
good. Thanks.
I did not include any test because this patch only removes an unintended log 
message; there is no functional change.


was (Author: cheersyang):
Here is a patch to eliminate this error message. Let me know if this looks 
good. Thanks.

> False alarm in namenode log - ERROR - Disk Balancer is not enabled
> --
>
> Key: HDFS-10588
> URL: https://issues.apache.org/jira/browse/HDFS-10588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10588.001.patch
>
>
> Noticed an error message in the namenode log:
> {code}2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
> (DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
> enabled.
> {code}
> even with the default configuration {{dfs.disk.balancer.enabled=false}}. This 
> is triggered when accessing the datanode web UI, because 
> {{DataNode#getDiskBalancerStatus}} calls the check.
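
For context, a minimal sketch of the kind of change such a patch could make (an 
assumption on my part; the content of HDFS-10588.001.patch is not shown in this 
thread). The unconditional ERROR log fires on every status query even when the 
feature is simply disabled, so the log call can be dropped and reporting left 
to the caller:
{code}
// Before (paraphrased): logs at ERROR on every DataNode#getDiskBalancerStatus
// call while the feature is disabled.
private void checkDiskBalancerEnabled() throws DiskBalancerException {
  if (!isDiskBalancerEnabled) {
    LOG.error("Disk Balancer is not enabled.");
    throw new DiskBalancerException("Disk Balancer is not enabled.",
        DiskBalancerException.Result.DISK_BALANCER_NOT_ENABLED);
  }
}

// After (sketch): the exception still signals the condition; callers decide
// how loudly to report it, so a disabled feature no longer spams the log.
private void checkDiskBalancerEnabled() throws DiskBalancerException {
  if (!isDiskBalancerEnabled) {
    throw new DiskBalancerException("Disk Balancer is not enabled.",
        DiskBalancerException.Result.DISK_BALANCER_NOT_ENABLED);
  }
}
{code}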






[jira] [Commented] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354579#comment-15354579
 ] 

Hadoop QA commented on HDFS-10588:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814569/HDFS-10588.001.patch |
| JIRA Issue | HDFS-10588 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc94196a9cea 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1faaa69 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15942/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15942/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |

[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354557#comment-15354557
 ] 

Hadoop QA commented on HDFS-6962:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} root: The patch generated 0 new + 1130 unchanged - 3 
fixed = 1130 total (was 1133) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814567/HDFS-6962.004.patch |
| JIRA Issue | HDFS-6962 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 

[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354497#comment-15354497
 ] 

Hadoop QA commented on HDFS-10210:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10210 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795305/HDFS-10210.002.patch |
| JIRA Issue | HDFS-10210 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15945/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354490#comment-15354490
 ] 

Hudson commented on HDFS-9852:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10030 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10030/])
HDFS-9852. hdfs dfs -setfacl error message is misleading (Wei-Chiu Chuang via 
aw) (aw: rev b3649adf6a1b2dc47566b4b0d652bd4e0a6a8056)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java


> hdfs dfs -setfacl error message is misleading
> -
>
> Key: HDFS-9852
> URL: https://issues.apache.org/jira/browse/HDFS-9852
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-9852.001.patch, HDFS-9852.002.patch
>
>
> When I type
> {noformat}hdfs dfs -setfacl -m default:user::rwx{noformat}
> it prints the error message:
> {noformat}
> -setfacl: <acl_spec> is missing
> Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
> <path>]|[--set <acl_spec> <path>]
> {noformat}
> But actually, it's the path that I missed. A correct command should be
> {noformat}
> hdfs dfs -setfacl -m default:user::rwx /data
> {noformat}
> In fact,
> {noformat}-setfacl -x | -m | --set{noformat} expects two parameters.
> We should print an error message like this if one argument is missing:
> {noformat}
> -setfacl: Missing either <acl_spec> or <path>
> {noformat}
> and the following if both are missing:
> {noformat}
> -setfacl: Missing arguments: <acl_spec> <path>
> {noformat}
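
A hypothetical sketch of the argument check this implies for 
{{AclCommands.java}} (structure and names are illustrative; the actual change 
is in the attached patches):
{code}
// -setfacl with -m, -x, or --set requires both an <acl_spec> and a <path>.
if (args.isEmpty()) {
  throw new HadoopIllegalArgumentException(
      "Missing arguments: <acl_spec> <path>");
}
if (args.size() == 1) {
  throw new HadoopIllegalArgumentException(
      "Missing either <acl_spec> or <path>");
}
{code}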






[jira] [Updated] (HDFS-4311) repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4311:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing as won't fix. httpfs was removed from trunk.

> repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
> ---
>
> Key: HDFS-4311
> URL: https://issues.apache.org/jira/browse/HDFS-4311
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha, 3.0.0-alpha1
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4311--N1.patch, HDFS-4311--N2.patch, 
> HDFS-4311--N3.patch, HDFS-4311--N4.patch, HDFS-4311--N5.patch, HDFS-4311.patch
>
>
> Some of the test cases in this test class are failing because they are 
> affected by static state changed by previous test cases, namely the static 
> field org.apache.hadoop.security.UserGroupInformation.loginUser.
> The suggested patch solves this problem.
> Besides, the following improvements are made:
> 1) parametrized the user principal and keytab values via system properties;
> 2) shutdown of the Jetty server and the minicluster between test cases is 
> added to make the test methods independent of each other.
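
A rough illustration of improvement 2 (hypothetical; the field names are 
illustrative, and the real changes are in the HDFS-4311--N*.patch attachments):
{code}
@After
public void cleanUpSharedState() throws Exception {
  // Shut down the Jetty server and the mini DFS cluster so that each test
  // method starts from a clean slate instead of inheriting static state.
  if (server != null) {
    server.stop();
  }
  if (dfsCluster != null) {
    dfsCluster.shutdown();
  }
}
{code}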






[jira] [Updated] (HDFS-4439) umask-mode does not support 4-digit umask value

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4439:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> umask-mode does not support 4-digit umask value
> ---
>
> Key: HDFS-4439
> URL: https://issues.apache.org/jira/browse/HDFS-4439
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Andy Isaacson
>Assignee: Chu Tong
> Attachments: HDFS-4439.patch
>
>
> Best practice for specifying file permissions using the legacy octal format 
> is to always add a leading "0" to ensure the value is treated as octal.  
> However the {{fs.permissions.umask-mode}} parsing code throws an error when 
> given a 4-digit string:
> {code}
> $ hdfs dfs -Dfs.permissions.umask-mode=0022 -touchz foo.txt
> 2013-01-24 12:49:02,352 WARN  permission.FsPermission 
> (FsPermission.java:getUMask(245)) - Unable to parse configuration 
> fs.permissions.umask-mode with value 0022 as octal or symbolic umask.
> -touchz: Unable to parse configuration fs.permissions.umask-mode with value 
> 0022 as octal or symbolic umask.
> Usage: hadoop fs [generic options] -touchz <path> ...
> {code}
> There's no downside to supporting {{0022}}, so hdfs should handle it 
> gracefully.
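
A hedged sketch of why this is easy to support (the actual fix is in the 
attached HDFS-4439.patch, not shown here): parsing the string as base-8 
accepts a leading zero, so the 3- and 4-digit forms resolve to the same mask:
{code}
// Both forms parse to the same octal value; the leading "0" is harmless.
int threeDigit = Integer.parseInt("022", 8);   // 18 decimal
int fourDigit  = Integer.parseInt("0022", 8);  // 18 decimal
assert threeDigit == fourDigit;
{code}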






[jira] [Updated] (HDFS-4439) umask-mode does not support 4-digit umask value

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4439:
---
Labels:   (was: BB2015-05-TBR)

> umask-mode does not support 4-digit umask value
> ---
>
> Key: HDFS-4439
> URL: https://issues.apache.org/jira/browse/HDFS-4439
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Andy Isaacson
>Assignee: Chu Tong
> Attachments: HDFS-4439.patch
>
>
> Best practice for specifying file permissions using the legacy octal format 
> is to always add a leading "0" to ensure the value is treated as octal.  
> However the {{fs.permissions.umask-mode}} parsing code throws an error when 
> given a 4-digit string:
> {code}
> $ hdfs dfs -Dfs.permissions.umask-mode=0022 -touchz foo.txt
> 2013-01-24 12:49:02,352 WARN  permission.FsPermission 
> (FsPermission.java:getUMask(245)) - Unable to parse configuration 
> fs.permissions.umask-mode with value 0022 as octal or symbolic umask.
> -touchz: Unable to parse configuration fs.permissions.umask-mode with value 
> 0022 as octal or symbolic umask.
> Usage: hadoop fs [generic options] -touchz <path> ...
> {code}
> There's no downside to supporting {{0022}}, so hdfs should handle it 
> gracefully.






[jira] [Updated] (HDFS-9852) hdfs dfs -setfacl error message is misleading

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-9852:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

> hdfs dfs -setfacl error message is misleading
> -
>
> Key: HDFS-9852
> URL: https://issues.apache.org/jira/browse/HDFS-9852
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-9852.001.patch, HDFS-9852.002.patch
>
>
> When I type
> {noformat}hdfs dfs -setfacl -m default:user::rwx{noformat}
> it prints the error message:
> {noformat}
> -setfacl: <acl_spec> is missing
> Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
> <path>]|[--set <acl_spec> <path>]
> {noformat}
> But actually, it's the path that I missed. A correct command should be
> {noformat}
> hdfs dfs -setfacl -m default:user::rwx /data
> {noformat}
> In fact,
> {noformat}-setfacl -x | -m | --set{noformat} expects two parameters.
> We should print an error message like this if one argument is missing:
> {noformat}
> -setfacl: Missing either <acl_spec> or <path>
> {noformat}
> and the following if both are missing:
> {noformat}
> -setfacl: Missing arguments: <acl_spec> <path>
> {noformat}






[jira] [Updated] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10588:
---
Status: Patch Available  (was: Open)

> False alarm in namenode log - ERROR - Disk Balancer is not enabled
> --
>
> Key: HDFS-10588
> URL: https://issues.apache.org/jira/browse/HDFS-10588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10588.001.patch
>
>
> Noticed an error message in the namenode log:
> {code}2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
> (DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
> enabled.
> {code}
> even with the default configuration {{dfs.disk.balancer.enabled=false}}. This 
> is triggered when accessing the datanode web UI, because 
> {{DataNode#getDiskBalancerStatus}} calls the check.






[jira] [Assigned] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-10588:
--

Assignee: Weiwei Yang

> False alarm in namenode log - ERROR - Disk Balancer is not enabled
> --
>
> Key: HDFS-10588
> URL: https://issues.apache.org/jira/browse/HDFS-10588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10588.001.patch
>
>
> Noticed an error message in the namenode log:
> {code}2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
> (DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
> enabled.
> {code}
> even with the default configuration {{dfs.disk.balancer.enabled=false}}. This 
> is triggered when accessing the datanode web UI, because 
> {{DataNode#getDiskBalancerStatus}} calls the check.






[jira] [Commented] (HDFS-10548) Remove the long deprecated BlockReaderRemote

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354471#comment-15354471
 ] 

Hadoop QA commented on HDFS-10548:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
71 unchanged - 30 fixed = 75 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814360/HDFS-10548-v2.patch |
| JIRA Issue | HDFS-10548 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0f83f938f654 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77031a9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15938/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15938/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt

[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354466#comment-15354466
 ] 

Hadoop QA commented on HDFS-10530:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 388 unchanged - 2 fixed = 390 total (was 390) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814461/HDFS-10530.2.patch |
| JIRA Issue | HDFS-10530 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5275a422366e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77031a9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15939/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15939/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15939/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15939/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10588:
---
Attachment: HDFS-10588.001.patch

Here is a patch to eliminate this error message. Let me know if this looks 
good. Thanks.

> False alarm in namenode log - ERROR - Disk Balancer is not enabled
> --
>
> Key: HDFS-10588
> URL: https://issues.apache.org/jira/browse/HDFS-10588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Weiwei Yang
> Attachments: HDFS-10588.001.patch
>
>
> Noticed an error message in the namenode log:
> {code}2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
> (DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
> enabled.
> {code}
> even with the default configuration {{dfs.disk.balancer.enabled=false}}. This 
> is triggered when accessing the datanode web UI, because 
> {{DataNode#getDiskBalancerStatus}} calls the check.






[jira] [Created] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10588:
--

 Summary: False alarm in namenode log - ERROR - Disk Balancer is 
not enabled
 Key: HDFS-10588
 URL: https://issues.apache.org/jira/browse/HDFS-10588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Reporter: Weiwei Yang


Noticed an error message in the namenode log:
{code}2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
(DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
enabled.
{code}
even with the default configuration {{dfs.disk.balancer.enabled=false}}. This 
is triggered when accessing the datanode web UI, because 
{{DataNode#getDiskBalancerStatus}} calls the check.






[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Hadoop Flags:   (was: Incompatible change)
Target Version/s: 2.8.0  (was: )
  Status: Patch Available  (was: In Progress)

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.
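
To make the expected behavior concrete, here is a minimal sketch of the POSIX 
rule at stake (illustrative only, not the patch; the method below is 
hypothetical). When the parent directory carries a default ACL, the default 
ACL's mask should govern the effective permissions, and the client umask 
should not also be applied:
{code}
FsPermission modeForCreate(FsPermission requested, FsPermission umask,
                           boolean parentHasDefaultAcl) {
  // POSIX semantics: a default ACL on the parent replaces umask filtering.
  return parentHasDefaultAcl ? requested : requested.applyUMask(umask);
}
{code}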






[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Attachment: HDFS-6962.004.patch

Patch 004 (passed system and compatibility tests):
* Create a new class {{FsCreateModes}} that extends {{FsPermission}} to store 
both masked and unmasked create modes. HDFS client uses it to sneak unmasked 
permission from FileContext/FileSystem -> AFS -> Hdfs -> DFSClient -> RPC. NN 
uses it to sneak unmasked permission from RPC -> NameNodeRpcServer.create 
(placed into PermissionStatus) -> FSNamesystem.startFile -> 
FSDirWriterFileOp.startFile/addFile -> INodeFile -> INodeWithAdditionalFields.
* Add field {{unmasked}} to protobuf message {{CreateRequestProto}} and 
{{MkdirsRequestProto}}
* Modify {{copyINodeDefaultAcl}} to switch between old and new ACL inheritance 
behavior.
* Add 2 unit tests to {{FSAclBaseTest}}

Questions:
* {{PermissionStatus#applyUMask}} is never used; remove it?
* {{DFSClient#mkdirs}} and {{DFSClient#primitiveMkdir}} use the file default if 
permission is null. Should they use the dir default permission?

TODO:
* Create a separate jira to support WebHDFSAcl and NFS if necessary
* More updates to {{HdfsPermissionsGuide.md}}

[~cnauroth] and [~atm], please review patch 004.
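
For readers following along, a simplified sketch of the shape of the new class 
(an approximation; see HDFS-6962.004.patch for the real {{FsCreateModes}}):
{code}
public final class FsCreateModes extends FsPermission {
  private final FsPermission unmasked;

  // Carry the masked mode (what umask filtering would produce) together with
  // the original unmasked mode, so the server can pick one per the config.
  public static FsPermission create(FsPermission masked,
                                    FsPermission unmasked) {
    return new FsCreateModes(masked, unmasked);
  }

  private FsCreateModes(FsPermission masked, FsPermission unmasked) {
    super(masked.toShort());
    this.unmasked = unmasked;
  }

  public FsPermission getUnmasked() {
    return unmasked;
  }
}
{code}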

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.






[jira] [Updated] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10569:
---
Priority: Major  (was: Minor)

> A bug causes OutOfIndex error in BlockListAsLongs
> -
>
> Key: HDFS-10569
> URL: https://issues.apache.org/jira/browse/HDFS-10569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch
>
>
> An obvious bug in LongsDecoder.getBlockListAsLongs(): the size of var *longs* 
> is the size of *values* plus 2, but the for-loop accesses *values* using the 
> *longs* index. This will cause an index-out-of-bounds error.
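
An illustrative reduction of the reported pattern (a hypothetical 
simplification; the real code is {{BlockListAsLongs.LongsDecoder}} and the fix 
is in the attached patches):
{code}
// Buggy shape: longs has values.size() + 2 slots, but the loop runs to
// longs.length, so values.get(i) overruns on the last two iterations.
static long[] toLongs(java.util.List<Long> values) {
  long[] longs = new long[2 + values.size()];
  for (int i = 0; i < longs.length; i++) {
    longs[i] = values.get(i);   // IndexOutOfBoundsException near the end
  }
  return longs;
}

// Fix sketch: bound the loop by values.size() and offset into longs, leaving
// the two leading slots for their intended header values.
static long[] toLongsFixed(java.util.List<Long> values) {
  long[] longs = new long[2 + values.size()];
  for (int i = 0; i < values.size(); i++) {
    longs[i + 2] = values.get(i);
  }
  return longs;
}
{code}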






[jira] [Comment Edited] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354400#comment-15354400
 ] 

John Zhuge edited comment on HDFS-6962 at 6/29/16 2:56 AM:
---

Patch 004 passed system and compatibility tests on a pseudo cluster. Uploaded 
test logs; the focus is on group WRITE permission. The test script is {{run}}, 
and the command line is:
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)


was (Author: jzhuge):
Patch 004 passed system and compatibility tests on a pseudo cluster. Upload 
test logs. Focus on group permission, especially WRITE. The test script is 
{{run}}, the command line is
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx

[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354430#comment-15354430
 ] 

John Zhuge commented on HDFS-6962:
--

Patch 004 passed the following ACL-related unit tests, including the new 
{{FSAclBaseTest#testUMaskDefaultAclNewFile}} and 
{{FSAclBaseTest#testUMaskDefaultAclNewDir}}: TestAcl, TestAclCommands, 
TestAclCLI, TestViewFileSystemWithAcls, TestViewFsWithAcls, 
TestAclWithSnapshot, TestAclConfigFlag, TestAclTransformation, 
TestFileContextAcl, TestFSImageWithAcl, TestNameNodeAcl, TestAclsEndToEnd, 
TestOfflineImageViewerForAcl, and TestWebHDFSAcl.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.






[jira] [Comment Edited] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354400#comment-15354400
 ] 

John Zhuge edited comment on HDFS-6962 at 6/29/16 2:48 AM:
---

Patch 004 passed system and compatibility tests on a pseudo cluster. Uploaded 
test logs; the focus is on group permission, especially WRITE. The test script 
is {{run}}, and the command line is:
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)


was (Author: jzhuge):
Pass system and compatibility tests with Patch 004 on a pseudo cluster. Upload 
test logs. Focus on group permission, especially WRITE. The test script is 
{{run}}, the command line is
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is ok!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for 
> inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: 

[jira] [Comment Edited] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354400#comment-15354400
 ] 

John Zhuge edited comment on HDFS-6962 at 6/29/16 2:40 AM:
---

Pass system and compatibility tests with Patch 004 on a pseudo cluster. Upload 
test logs. Focus on group permission, especially WRITE. The test script is 
{{run}}, the command line is
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)


was (Author: jzhuge):
Upload system and compatibility test logs for Patch 004 on a pseudo cluster. 
Pay attention to group permission, especially WRITE bit. The test script is 
{{run}}, the command line is
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is ok!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for 
> inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: 

[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Attachment: run
enabled_old_client.log
enabled_new_client.log
disabled_old_client.log
disabled_new_client.log

Uploaded system and compatibility test logs for Patch 004 on a pseudo cluster. 
Pay attention to group permission, especially the WRITE bit. The test script is 
{{run}}, and the command line is
{code}
test_tag=disabled_new_client ; script ~/test/$test_tag.log ~/test/run $test_tag
{code}

Test server is running {{3.0.0-alpha1 + Patch 004}} with POSIX ACL Inheritance 
either enabled or disabled. There are 2 versions of test clients. The POSIX ACL 
inheritance is only in effect when the flag is true and the requests come from 
a compatible client. The following table shows the matrix for the log files.

|| Client Version\POSIX ACL Inheritance || Enabled || Disabled ||
| 2.6.0 | enabled_old_client.log | disabled_old_client.log |
| 3.0.0-alpha1 + Patch 004 | enabled_new_client.log | disabled_new_client.log |

In each log file, there is a test matrix of the following dimensions:
* Parent dir has default ACL or not
* Umask 002 or 027
* Create a file (create request) or a sub-directory (mkdirs request)

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is ok!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for 
> inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner -- ) with the group part of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354218#comment-15354218
 ] 

Weiwei Yang commented on HDFS-10583:


Uploaded two screenshots from testing on the latest trunk. Please see 
[^conf_link_on_NN.jpg] and [^conf_link_on_DN.jpg].

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-10583.001.patch, conf_link_on_DN.jpg, 
> conf_link_on_NN.jpg
>
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10583:
---
Attachment: conf_link_on_DN.jpg
conf_link_on_NN.jpg

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-10583.001.patch, conf_link_on_DN.jpg, 
> conf_link_on_NN.jpg
>
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10512) VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks

2016-06-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354213#comment-15354213
 ] 

Yiqun Lin commented on HDFS-10512:
--

From the code in {{FsDatasetImpl}}, I see that the method 
{{FsDatasetImpl#getVolume}} returning null causes the NPE. In this code:
{code}
  @Override
  public synchronized FsVolumeImpl getVolume(final ExtendedBlock b) {
final ReplicaInfo r =  volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
return r != null? (FsVolumeImpl)r.getVolume(): null;
  }
{code}
So it means that the ReplicaInfo of the corrupt block has been removed from the 
volumeMap. There are many cases that will trigger the {{volumeMap.remove}} 
operation in {{FsDatasetImpl}}. So I suspect the case mentioned in HDFS-10587 
can lead to this; can you confirm, [~jojochuang]?
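
If the removed-replica theory is right, one possible mitigation is a null guard in 
{{DataNode.reportBadBlocks()}}. The following is only an illustrative sketch of that 
idea, not the attached patch (the LOG field is assumed to exist on the DataNode):
{code}
public void reportBadBlocks(ExtendedBlock block) throws IOException {
  // Sketch only: if the replica was concurrently removed from volumeMap,
  // getVolume() returns null, so guard before dereferencing it.
  FsVolumeSpi volume = getFSDataset().getVolume(block);
  if (volume == null) {
    LOG.warn("Cannot find FsVolumeSpi to report bad block: " + block);
    return;
  }
  BPOfferService bpos = getBPOSForBlock(block);
  bpos.reportBadBlocks(block, volume.getStorageID(), volume.getStorageType());
}
{code}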


> VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks
> --
>
> Key: HDFS-10512
> URL: https://issues.apache.org/jira/browse/HDFS-10512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10512.001.patch, HDFS-10512.002.patch
>
>
> VolumeScanner may terminate due to unexpected NullPointerException thrown in 
> {{DataNode.reportBadBlocks()}}. This is different from HDFS-8850/HDFS-9190.
> I observed this bug in a production CDH 5.5.1 cluster, and the same bug still 
> persists in upstream trunk.
> {noformat}
> 2016-04-07 20:30:53,830 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-1800173197-10.204.68.5-125156296:blk_1170134484_96468685 on /dfs/dn
> 2016-04-07 20:30:53,831 ERROR 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting because of exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:1018)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner$ScanResultHandler.handle(VolumeScanner.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:443)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:547)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:621)
> 2016-04-07 20:30:53,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting.
> {noformat}
> I think the NPE comes from the volume variable in the following code snippet. 
> Somehow the volume scanner knows the volume, but the datanode cannot look up 
> the volume using the block.
> {code}
> public void reportBadBlocks(ExtendedBlock block) throws IOException{
> BPOfferService bpos = getBPOSForBlock(block);
> FsVolumeSpi volume = getFSDataset().getVolume(block);
> bpos.reportBadBlocks(
> block, volume.getStorageID(), volume.getStorageType());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353152#comment-15353152
 ] 

Weiwei Yang edited comment on HDFS-10583 at 6/29/16 1:59 AM:
-

Thanks [~rushabh.shah], I just revised the title; hope it is clearer now. Feel 
free to modify it.


was (Author: cheersyang):
Thanks [~rushabh.shah] I just revised the title, hope it is clearer now. Feel 
free the modify it.

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-10583.001.patch
>
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10583:
---
Status: Patch Available  (was: Open)

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-10583.001.patch
>
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10583:
---
Attachment: HDFS-10583.001.patch

The patch is trivial; it adds the link to the namenode and datanode UIs. I also 
checked the journalnode and secondary namenode and did not see a need to add 
this for them. Uploaded v1 patch for review. 

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-10583.001.patch
>
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10583:
---
Priority: Minor  (was: Major)

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> When an admin wants to explore a component's configuration properties, such as 
> the namenode's or datanode's, it is helpful to provide a UI page to read them. 
> This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-28 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354199#comment-15354199
 ] 

GAO Rui commented on HDFS-10530:


Patch 2 has been attached to address the TestBalancer failures.
{{hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer}} seems unrelated 
to the block placement changes, and the other tests should pass this 
time.

> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch
>
>
> This issue was found by [~tfukudom].
> Under RS-DEFAULT-6-3-64k EC policy, 
> 1. Create an EC file; the file was written to all 5 racks (2 DNs each) 
> of the cluster.
> 2. Reconstruction work would be scheduled if a 6th rack is added. 
> 3. However, adding a 7th or further racks will not trigger reconstruction 
> work. 
> Based on the default EC block placement policy defined in 
> “BlockPlacementPolicyRackFaultTolerant.java”, an EC file should be 
> scheduled to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
> instead of *getRealDataBlockNum()*.
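
To see why the counted number matters: with RS-6-3, a block group has 6 data + 3 
parity internal blocks. A rough rack-fault-tolerant check is "satisfied when the 
racks used >= min(numReplicas, total racks)"; the toy Java sketch below (an assumed 
simplification of the check, not HDFS source) reproduces the reported behavior when 
only data blocks are counted.
{code}
// Toy model (assumed simplification, not HDFS source): placement is
// treated as satisfied when racksUsed >= min(numReplicas, totalRacks).
public class EcPlacementCheck {
  static boolean satisfied(int numReplicas, int racksUsed, int totalRacks) {
    return racksUsed >= Math.min(numReplicas, totalRacks);
  }

  public static void main(String[] args) {
    int racksUsed = 6, totalRacks = 7; // a 7th rack was just added
    int dataOnly = 6;      // analogous to getRealDataBlockNum()
    int dataAndParity = 9; // analogous to getRealTotalBlockNum()
    // true: counting data blocks only, no reconstruction is scheduled
    System.out.println(satisfied(dataOnly, racksUsed, totalRacks));
    // false: counting data + parity, reconstruction would be scheduled
    System.out.println(satisfied(dataAndParity, racksUsed, totalRacks));
  }
}
{code}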



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-28 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-10530:
---
Status: In Progress  (was: Patch Available)

> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch
>
>
> This issue was found by [~tfukudom].
> Under RS-DEFAULT-6-3-64k EC policy, 
> 1. Create an EC file; the file was written to all 5 racks (2 DNs each) 
> of the cluster.
> 2. Reconstruction work would be scheduled if a 6th rack is added. 
> 3. However, adding a 7th or further racks will not trigger reconstruction 
> work. 
> Based on the default EC block placement policy defined in 
> “BlockPlacementPolicyRackFaultTolerant.java”, an EC file should be 
> scheduled to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
> instead of *getRealDataBlockNum()*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-28 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-10530:
---
Status: Patch Available  (was: In Progress)

> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch
>
>
> This issue was found by [~tfukudom].
> Under RS-DEFAULT-6-3-64k EC policy, 
> 1. Create an EC file; the file was written to all 5 racks (2 DNs each) 
> of the cluster.
> 2. Reconstruction work would be scheduled if a 6th rack is added. 
> 3. However, adding a 7th or further racks will not trigger reconstruction 
> work. 
> Based on the default EC block placement policy defined in 
> “BlockPlacementPolicyRackFaultTolerant.java”, an EC file should be 
> scheduled to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
> instead of *getRealDataBlockNum()*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-28 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-10530:
---
Attachment: HDFS-10530.2.patch

> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-10530.1.patch, HDFS-10530.2.patch
>
>
> This issue was found by [~tfukudom].
> Under RS-DEFAULT-6-3-64k EC policy, 
> 1. Create an EC file; the file was written to all 5 racks (2 DNs each) 
> of the cluster.
> 2. Reconstruction work would be scheduled if a 6th rack is added. 
> 3. However, adding a 7th or further racks will not trigger reconstruction 
> work. 
> Based on the default EC block placement policy defined in 
> “BlockPlacementPolicyRackFaultTolerant.java”, an EC file should be 
> scheduled to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
> instead of *getRealDataBlockNum()*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10548) Remove the long deprecated BlockReaderRemote

2016-06-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354084#comment-15354084
 ] 

Kai Zheng commented on HDFS-10548:
--

Updated the patch to do the rename and also clean up the related tests.

> Remove the long deprecated BlockReaderRemote
> 
>
> Key: HDFS-10548
> URL: https://issues.apache.org/jira/browse/HDFS-10548
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-10548-v1.patch, HDFS-10548-v2.patch
>
>
> To lessen the maintenance burden, as raised in HDFS-8901, I suggest we remove 
> the {{BlockReaderRemote}} class that was deprecated a very long time ago. 
> From {{BlockReaderRemote}} header:
> {quote}
>  * @deprecated this is an old implementation that is being left around
>  * in case any issues spring up with the new {@link BlockReaderRemote2}
>  * implementation.
>  * It will be removed in the next release.
> {quote}
> From {{BlockReaderRemote2}} class header:
> {quote}
>  * This is a new implementation introduced in Hadoop 0.23 which
>  * is more efficient and simpler than the older BlockReader
>  * implementation. It should be renamed to BlockReaderRemote
>  * once we are confident in it.
> {quote}
> So even further, after getting rid of the old class, we could rename as the 
> comment suggested: BlockReaderRemote2 => BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10548) Remove the long deprecated BlockReaderRemote

2016-06-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-10548:
-
Attachment: HDFS-10548-v2.patch

> Remove the long deprecated BlockReaderRemote
> 
>
> Key: HDFS-10548
> URL: https://issues.apache.org/jira/browse/HDFS-10548
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-10548-v1.patch, HDFS-10548-v2.patch
>
>
> To lessen the maintenance burden, as raised in HDFS-8901, I suggest we remove 
> the {{BlockReaderRemote}} class that was deprecated a very long time ago. 
> From {{BlockReaderRemote}} header:
> {quote}
>  * @deprecated this is an old implementation that is being left around
>  * in case any issues spring up with the new {@link BlockReaderRemote2}
>  * implementation.
>  * It will be removed in the next release.
> {quote}
> From {{BlockReaderRemote2}} class header:
> {quote}
>  * This is a new implementation introduced in Hadoop 0.23 which
>  * is more efficient and simpler than the older BlockReader
>  * implementation. It should be renamed to BlockReaderRemote
>  * once we are confident in it.
> {quote}
> So even further, after getting rid of the old class, we could rename as the 
> comment suggested: BlockReaderRemote2 => BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10548) Remove the long deprecated BlockReaderRemote

2016-06-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354000#comment-15354000
 ] 

Kai Zheng commented on HDFS-10548:
--

Sure, Andrew! I will do that shortly.

> Remove the long deprecated BlockReaderRemote
> 
>
> Key: HDFS-10548
> URL: https://issues.apache.org/jira/browse/HDFS-10548
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-10548-v1.patch
>
>
> To lessen the maintenance burden, as raised in HDFS-8901, I suggest we remove 
> the {{BlockReaderRemote}} class that was deprecated a very long time ago. 
> From {{BlockReaderRemote}} header:
> {quote}
>  * @deprecated this is an old implementation that is being left around
>  * in case any issues spring up with the new {@link BlockReaderRemote2}
>  * implementation.
>  * It will be removed in the next release.
> {quote}
> From {{BlockReaderRemote2}} class header:
> {quote}
>  * This is a new implementation introduced in Hadoop 0.23 which
>  * is more efficient and simpler than the older BlockReader
>  * implementation. It should be renamed to BlockReaderRemote
>  * once we are confident in it.
> {quote}
> So even further, after getting rid of the old class, we could rename as the 
> comment suggested: BlockReaderRemote2 => BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage histogram

2016-06-28 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10534:
--
Description: 
In addition to *Min/Median/Max*, another meaningful metric for cluster balance 
is DN usage in histogram form.
Since the NN already provides the necessary information to calculate a histogram 
of DN usage, it can be done on the JS side.

  was:
In addition of *Min/Median/Max*, another meaningful metric for cluster balance 
is DN usage rate at a certain percentile (e.g. 90 or 95). We should add a 
config option, and another filed on NN WebUI, to display this.

Current implementation 


> NameNode WebUI should display DataNode usage histogram
> --
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is DN usage in histogram form.
> Since the NN already provides the necessary information to calculate a histogram 
> of DN usage, it can be done on the JS side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage histogram

2016-06-28 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10534:
--
Description: 
In addition of *Min/Median/Max*, another meaningful metric for cluster balance 
is DN usage rate at a certain percentile (e.g. 90 or 95). We should add a 
config option, and another filed on NN WebUI, to display this.

Current implementation 

  was:In addition of *Min/Median/Max*, another meaningful metric for cluster 
balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should add 
a config option, and another filed on NN WebUI, to display this.


> NameNode WebUI should display DataNode usage histogram
> --
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should 
> add a config option, and another field on the NN WebUI, to display this.
> Current implementation 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-28 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353977#comment-15353977
 ] 

Kai Sasaki commented on HDFS-10534:
---

[~andrew.wang] Thanks for the comment.
I think it is possible to calculate the histogram because the NN already provides 
a UI that shows the usage of each DN. So I'll update the patch to build the 
histogram on the JS side, and I'll also update the title of this JIRA to fit the 
current motivation.
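
The real change would live in the NameNode web UI's JavaScript, but the bucketing 
logic itself is language-neutral. A small illustrative sketch (in Java, with made-up 
sample data) of binning per-DN usage percentages into a histogram:
{code}
// Illustrative sketch of histogram bucketing for DN usage percentages.
public class DnUsageHistogram {
  static int[] histogram(double[] usagePercents, int numBins) {
    int[] bins = new int[numBins];
    double width = 100.0 / numBins;
    for (double u : usagePercents) {
      int i = Math.min((int) (u / width), numBins - 1); // 100% falls in the last bin
      bins[i]++;
    }
    return bins;
  }

  public static void main(String[] args) {
    double[] usage = {12.5, 48.0, 51.2, 88.9, 90.1}; // sample DN usage, in percent
    int[] bins = histogram(usage, 10);                // ten 10%-wide bins
    for (int i = 0; i < bins.length; i++) {
      System.out.printf("%3d-%3d%%: %d%n", i * 10, (i + 1) * 10, bins[i]);
    }
  }
}
{code}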

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should 
> add a config option, and another field on the NN WebUI, to display this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage histogram

2016-06-28 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10534:
--
Summary: NameNode WebUI should display DataNode usage histogram  (was: 
NameNode WebUI should display DataNode usage rate with a certain percentile)

> NameNode WebUI should display DataNode usage histogram
> --
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should 
> add a config option, and another field on the NN WebUI, to display this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10489:
---
Affects Version/s: 2.6.4

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for key provider uri.
> We can deprecate the dfs. key for 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10548) Remove the long deprecated BlockReaderRemote

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353935#comment-15353935
 ] 

Andrew Wang commented on HDFS-10548:


Hi Kai, do you want to also rename BlockReaderRemote2 to just BlockReaderRemote 
per the class javadoc in BRR2?

> Remove the long deprecated BlockReaderRemote
> 
>
> Key: HDFS-10548
> URL: https://issues.apache.org/jira/browse/HDFS-10548
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-10548-v1.patch
>
>
> To lessen the maintenance burden, as raised in HDFS-8901, I suggest we remove 
> the {{BlockReaderRemote}} class that was deprecated a very long time ago. 
> From {{BlockReaderRemote}} header:
> {quote}
>  * @deprecated this is an old implementation that is being left around
>  * in case any issues spring up with the new {@link BlockReaderRemote2}
>  * implementation.
>  * It will be removed in the next release.
> {quote}
> From {{BlockReaderRemote2}} class header:
> {quote}
>  * This is a new implementation introduced in Hadoop 0.23 which
>  * is more efficient and simpler than the older BlockReader
>  * implementation. It should be renamed to BlockReaderRemote
>  * once we are confident in it.
> {quote}
> So even further, after getting rid of the old class, we could rename as the 
> comment suggested: BlockReaderRemote2 => BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10512) VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks

2016-06-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353913#comment-15353913
 ] 

Wei-Chiu Chuang commented on HDFS-10512:


HDFS-10587 was filed for the root cause of the bad block that I observed.

In addition, HDFS-6937 will be obsolete if we can get a good fix for this bug.

> VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks
> --
>
> Key: HDFS-10512
> URL: https://issues.apache.org/jira/browse/HDFS-10512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10512.001.patch, HDFS-10512.002.patch
>
>
> VolumeScanner may terminate due to unexpected NullPointerException thrown in 
> {{DataNode.reportBadBlocks()}}. This is different from HDFS-8850/HDFS-9190.
> I observed this bug in a production CDH 5.5.1 cluster, and the same bug still 
> persists in upstream trunk.
> {noformat}
> 2016-04-07 20:30:53,830 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-1800173197-10.204.68.5-125156296:blk_1170134484_96468685 on /dfs/dn
> 2016-04-07 20:30:53,831 ERROR 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting because of exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:1018)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner$ScanResultHandler.handle(VolumeScanner.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:443)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:547)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:621)
> 2016-04-07 20:30:53,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting.
> {noformat}
> I think the NPE comes from the volume variable in the following code snippet. 
> Somehow the volume scanner knows the volume, but the datanode cannot look up 
> the volume using the block.
> {code}
> public void reportBadBlocks(ExtendedBlock block) throws IOException{
> BPOfferService bpos = getBPOSForBlock(block);
> FsVolumeSpi volume = getFSDataset().getVolume(block);
> bpos.reportBadBlocks(
> block, volume.getStorageID(), volume.getStorageType());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10587) Incorrect offset/length calculation in pipeline recovery causes block corruption

2016-06-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-10587:
--

 Summary: Incorrect offset/length calculation in pipeline recovery 
causes block corruption
 Key: HDFS-10587
 URL: https://issues.apache.org/jira/browse/HDFS-10587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


We found that an incorrect offset and length calculation in pipeline recovery may 
cause block corruption and result in missing blocks under a very unfortunate 
scenario. 

(1) A client established a pipeline and started writing data to the pipeline.
(2) One of the datanodes in the pipeline restarted, closing the socket, and 
some written data was left unacknowledged.
(3) The client replaced the failed datanode with a new one, initiating a block 
transfer to copy the existing data in the block to the new datanode.
(4) The block was transferred to the new node. Crucially, the entire block, 
including the unacknowledged data, was transferred.
(5) The last chunk (512 bytes) was not full, but the destination still 
reserved the whole chunk in its buffer and wrote the entire buffer to disk; 
therefore some of the written data was garbage.
(6) When the transfer was done, the destination datanode converted the replica 
from temporary to rbw, which made its visible length the length of the bytes on 
disk. That is to say, it assumed whatever was transferred was acknowledged. 
However, the visible length of the replica differed (rounded up to the next 
multiple of 512) from that of the transfer source.
(7) The client then truncated the block in an attempt to remove unacknowledged 
data. However, because the visible length was equivalent to the bytes on disk, 
it did not truncate the unacknowledged data.
(8) When new data was appended at the destination, it skipped the bytes already 
on disk. Therefore, whatever was written as garbage was not replaced.
(9) The volume scanner detected the corrupt replica, but due to HDFS-10512 it 
wouldn't tell the NameNode to mark the replica as corrupt, so the client continued 
to form a pipeline using the corrupt replica.
(10) Finally the DN that had the only healthy replica was restarted. The NameNode 
then updated the pipeline to contain only the corrupt replica.
(11) The client continued to write to the corrupt replica, because neither the 
client nor the datanode itself knew the replica was corrupt. When the restarted 
datanodes came back, their replicas were stale, though not corrupt. Therefore, 
none of the replicas was both healthy and up to date.

The sequence of events was reconstructed based on DataNode/NameNode logs and my 
understanding of the code.
Incidentally, we have observed the same sequence of events on two independent 
clusters.
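
To make step (6) concrete, here is a small self-contained Java sketch of the 
round-up behavior, assuming the default 512-byte checksum chunk mentioned in 
step (5):
{code}
// Illustration of step (6): a visible length derived from bytes on disk is
// the acknowledged length rounded UP to the next multiple of 512, so the
// garbage tail of the partial last chunk is counted as acknowledged data.
public class ChunkRounding {
  static final long CHUNK = 512;

  static long roundUpToChunk(long len) {
    return (len + CHUNK - 1) / CHUNK * CHUNK;
  }

  public static void main(String[] args) {
    long acknowledged = 1000;                    // bytes actually acknowledged
    long visible = roundUpToChunk(acknowledged); // 1024 bytes on disk
    System.out.println("acknowledged=" + acknowledged + " visible=" + visible);
  }
}
{code}
In this example the 24-byte garbage tail becomes part of the visible length, which 
is exactly what the truncation in step (7) fails to remove.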



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10586) Erasure Code misfunctions when 3 DataNode down

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10586:
---
Component/s: erasure-coding

> Erasure Code misfunctions when 3 DataNode down
> --
>
> Key: HDFS-10586
> URL: https://issues.apache.org/jira/browse/HDFS-10586
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
> Environment: 9 DataNodes and 1 NameNode. The erasure code policy is 
> set as "6-3". When 3 DataNodes are down, erasure coding fails and an exception 
> is thrown
>Reporter: gao shan
>
> The following are the steps to reproduce:
> 1) hadoop fs -mkdir /ec
> 2) set the erasure code policy as "6-3"
> 3) "write" data by: 
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -write -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> 4) Manually take down 3 nodes: kill the "datanode" and "nodemanager" processes 
> on 3 DataNodes.
> 5) "read" the data using erasure coding by:
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -read -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> then the failure occurs and the exception is thrown as:
> INFO mapreduce.Job: Task Id : attempt_1465445965249_0008_m_34_2, Status : 
> FAILED
> Error: java.io.IOException: 4 missing blocks, the stripe is: Offset=0, 
> length=8388608, fetchedChunksNum=0, missingChunksNum=4
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.checkMissingBlocks(DFSStripedInputStream.java:614)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readParityChunks(DFSStripedInputStream.java:647)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:762)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:316)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:450)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:531)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:508)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDFS-10586) Erasure Code misfunctions when 3 DataNode down

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HADOOP-13292 to HDFS-10586:
-

Affects Version/s: (was: 3.0.0-alpha1)
   3.0.0-alpha1
  Key: HDFS-10586  (was: HADOOP-13292)
  Project: Hadoop HDFS  (was: Hadoop Common)

> Erasure Code misfunctions when 3 DataNode down
> --
>
> Key: HDFS-10586
> URL: https://issues.apache.org/jira/browse/HDFS-10586
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
> Environment: 9 DataNodes and 1 NameNode. The erasure code policy is 
> set as "6-3". When 3 DataNodes are down, erasure coding fails and an exception 
> is thrown
>Reporter: gao shan
>
> The following are the steps to reproduce:
> 1) hadoop fs -mkdir /ec
> 2) set the erasure code policy as "6-3"
> 3) "write" data by: 
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -write -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> 4) Manually take down 3 nodes: kill the "datanode" and "nodemanager" processes 
> on 3 DataNodes.
> 5) "read" the data using erasure coding by:
> time hadoop jar 
> /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
>   TestDFSIO -D test.build.data=/ec -read -nrFiles 30 -fileSize 12288 
> -bufferSize 1073741824
> then the failure occurs and the exception is thrown as:
> INFO mapreduce.Job: Task Id : attempt_1465445965249_0008_m_34_2, Status : 
> FAILED
> Error: java.io.IOException: 4 missing blocks, the stripe is: Offset=0, 
> length=8388608, fetchedChunksNum=0, missingChunksNum=4
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.checkMissingBlocks(DFSStripedInputStream.java:614)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readParityChunks(DFSStripedInputStream.java:647)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:762)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:316)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:450)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:531)
>   at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:508)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
>   at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10564) UNDER MIN REPL'D BLOCKS should be prioritized for replication

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353819#comment-15353819
 ] 

Andrew Wang commented on HDFS-10564:


Hi Elliott, I'm guessing "draining" here means decommissioning? I agree with 
your assessment about prioritization.

> UNDER MIN REPL'D BLOCKS should be prioritized for replication
> -
>
> Key: HDFS-10564
> URL: https://issues.apache.org/jira/browse/HDFS-10564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Elliott Clark
>
> When datanodes get drained, it is probably because the hardware is bad or 
> suspect. The blocks that have no other live nodes should be prioritized. 
> However, it appears that is not the case at all.
> Draining full nodes with lots of blocks but only a handful of 
> under-min-replicated blocks takes about the full time before fsck reports 
> clean again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10409) libhdfs++: Something is holding connection_state_lock in RpcConnectionImpl destructor

2016-06-28 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353777#comment-15353777
 ] 

Xiaowei Zhu commented on HDFS-10409:


+1 on not holding lock in dtor for this case.
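
A minimal, self-contained sketch of the suspected pattern (illustrative names, 
not actual libhdfs++ code): the last shared_ptr reference is dropped while the 
object's own mutex is still held, so the destructor's lock_guard tries to 
re-acquire a non-recursive mutex that is both locked and about to be destroyed, 
which is undefined behavior and in practice a deadlock.

{code}
// Hedged repro sketch; names are illustrative, not libhdfs++ code.
#include <memory>
#include <mutex>

struct Conn {
  std::mutex connection_state_lock_;
  ~Conn() {
    // Re-locks a mutex the caller may still hold: UB / deadlock.
    std::lock_guard<std::mutex> lock(connection_state_lock_);
  }
};

struct Engine {
  std::shared_ptr<Conn> conn_ = std::make_shared<Conn>();
  void SomeFunctionThatShouldntTakeLock() {
    std::lock_guard<std::mutex> lock(conn_->connection_state_lock_);
    conn_.reset();  // ~Conn() runs here, with the lock still held
  }
};

int main() {
  Engine e;
  // e.SomeFunctionThatShouldntTakeLock();  // would deadlock by design
  return 0;
}
{code}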

> libhdfs++: Something is holding connection_state_lock in RpcConnectionImpl 
> destructor
> -
>
> Key: HDFS-10409
> URL: https://issues.apache.org/jira/browse/HDFS-10409
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10409.HDFS-8707.000.patch, locked_dtor.patch
>
>
> The destructor of RpcConnectionImpl grabs a lock using a std::lock_guard<>.  
> It turns out something is already holding the lock when this happens.  The 
> best bet is something that looks like:
> {code}
> void SomeFunctionThatShouldntTakeLock(){
>   std::lock_guard<std::mutex> bad(connection_state_lock_);
>   conn_.reset(); //conn is a shared_ptr to RpcConnectionImpl
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-06-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353764#comment-15353764
 ] 

Wei-Chiu Chuang commented on HDFS-6937:
---

So, I think this patch is not valid.

If there is indeed a checksum error at the middle (second) node of the 
pipeline, the tail node will detect it, send an ERROR_CHECKSUM code back to 
the client, and terminate the connection. This should effectively remove the 
middle node from the pipeline.

If for some reason the error code is not sent before the connection is 
terminated, pipeline recovery will be initiated, transferring the second 
node's replica to the downstream node. Because the replica is corrupt, the 
destination will terminate the connection. When this happens, the second node 
will initiate a VolumeScanner to check the integrity of the local replica; it 
should tell the NameNode that the replica is bad, and the NameNode should then 
exclude the datanode when the client asks to update the pipeline. The bug in 
HDFS-10512 is why it doesn't tell the NameNode.

To sum up, I think this patch is redundant if we can find a better fix for 
HDFS-10512.

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch, 
> HDFS-6937.003.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detects a checksum error and terminates; DN2 truncates its replica to the 
> ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 is replaced with DN5 (and 
> so on), it fails for the same reason. This leads to the observation that 
> DN2's data is corrupted. 
> Found that the software currently truncates DN2's replica to the ACKed size 
> after DN3 terminates. But it doesn't check the correctness of the data 
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds 
> a checksum error, propagate this info back to the upstream DN (DN2 here); DN2 
> then checks the correctness of the data already written to disk, and 
> truncates the replica to MIN(correctDataSize, ACKedSize).
> This issue is similar to what was reported in HDFS-3875, and the truncation 
> at DN2 was actually introduced as part of the HDFS-3875 solution. Filing this 
> jira for the issue reported here. HDFS-3875 was filed by [~tlipcon], 
> who proposed something similar there.
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6434) Default permission for creating file should be 644 for WebHdfs/HttpFS

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353733#comment-15353733
 ] 

Hudson commented on HDFS-6434:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10028 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10028/])
HDFS-6434. Default permission for creating file should be 644 for (wang: rev 
c0829f449337b78ac0b995e216f7324843e74dd2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web/resources/TestWebHdfsCreatePermissions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PermissionParam.java


> Default permission for creating file should be 644 for WebHdfs/HttpFS
> -
>
> Key: HDFS-6434
> URL: https://issues.apache.org/jira/browse/HDFS-6434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Juan Yu
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-6434.002.patch, HDFS-6434.patch
>
>
> Creating a file using WebHdfs without specifying a permission: the file is 
> created with permission 755; it should be 644.
> WebHdfs seems to use the same default permission for both files and 
> directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10488) WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating files/directories without specifying permissions

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353717#comment-15353717
 ] 

Andrew Wang commented on HDFS-10488:


The current 005 patch looks okay for branch-2, but since we just committed 
HDFS-6434, does this patch need to be updated for trunk? It mentions a file 
having default 755 permissions, but HDFS-6434 makes the default 644. The other 
note is that there's a "Permission" section in this doc that mentions 755 as 
the default for both files and dirs; this should also be updated.

Would also appreciate it if you could update the JIRA summary to reflect the 
contents of the patch. Thanks, Wellington.

> WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating 
> files/directories without specifying permissions
> --
>
> Key: HDFS-10488
> URL: https://issues.apache.org/jira/browse/HDFS-10488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-10488.002.patch, HDFS-10488.003.patch, 
> HDFS-10488.005.patch, HDFS-10488.patch
>
>
> WebHDFS methods for creating files/directories always create them with 755 
> permissions by default, ignoring any configured 
> *fs.permissions.umask-mode* in the case of directories.
> The DFS CLI, however, applies the configured umask to the 777 base permission 
> for directories, or the 666 base permission for files.
> Example below shows the different behaviour when creating directory via CLI 
> and WebHDFS:
> {noformat}
> 1) Creating a directory under '/test/' as 'test-user'. Configured 
> fs.permissions.umask-mode is 000: 
> $ sudo -u test-user hdfs dfs -mkdir /test/test-user1 
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user1 
> # file: /test/test-user1
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::rwx 
> other::rwx 
> 4) Doing the same via WebHDFS does not get the proper ACLs: 
> $ curl -i -X PUT 
> "http://namenode-host:50070/webhdfs/v1/test/test-user2?user.name=test-user=MKDIRS;
>  
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user2 
> # file: /test/test-user2 
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::r-x 
> other::r-x
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6434) Default permission for creating file should be 644 for WebHdfs/HttpFS

2016-06-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6434:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
 Release Note: The default permissions of files and directories created via 
WebHDFS and HttpFS are now 644 and 755 respectively. See HDFS-10488 for related 
discussion.
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks for the contribution Wellington!

> Default permission for creating file should be 644 for WebHdfs/HttpFS
> -
>
> Key: HDFS-6434
> URL: https://issues.apache.org/jira/browse/HDFS-6434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Juan Yu
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-6434.002.patch, HDFS-6434.patch
>
>
> Creating a file using WebHdfs without specifying a permission: the file is 
> created with permission 755; it should be 644.
> WebHdfs seems to use the same default permission for both files and 
> directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6434) Default permission for creating file should be 644 for WebHdfs/HttpFS

2016-06-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353704#comment-15353704
 ] 

Andrew Wang commented on HDFS-6434:
---

Sorry for the delay, been travelling recently. Thanks for the pointer to 
HDFS-10488, that reasoning makes sense to me. I'm +1, will commit shortly.

> Default permission for creating file should be 644 for WebHdfs/HttpFS
> -
>
> Key: HDFS-6434
> URL: https://issues.apache.org/jira/browse/HDFS-6434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Juan Yu
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-6434.002.patch, HDFS-6434.patch
>
>
> Creating a file using WebHdfs without specifying a permission: the file is 
> created with permission 755; it should be 644.
> WebHdfs seems to use the same default permission for both files and 
> directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10554) libhdfs++: signed to unsigned conversions are breaking things and compiler isn't issuing expected warnings

2016-06-28 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353679#comment-15353679
 ] 

James Clampffer commented on HDFS-10554:


Turns out we've been -slightly- very careless about implicit conversions that 
are technically legal in c++.  Building with -Wsign-conversion shows that 
there's quite a bit of work to do here.  Once the existing issues are fixed I 
think we should build with -Wsign-conversion by default.
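
As a hedged, standalone illustration of the failure mode (illustrative names, 
not the actual Options/URI code): a -1 "unset" sentinel silently becomes a huge 
unsigned value at a call boundary; gcc says nothing by default, but 
-Wsign-conversion flags the call.

{code}
// Compile with: g++ -Wsign-conversion demo.cc
#include <cstdint>
#include <iostream>

// Callee takes an unsigned count, like the uint64_t that max_rpc_retries
// reportedly ends up cast into.
void set_max_retries(uint64_t retries) {
  std::cout << "retries = " << retries << "\n";
}

int main() {
  const int kNoRetry = -1;    // signed "unset" sentinel
  set_max_retries(kNoRetry);  // implicit conversion prints 18446744073709551615
  return 0;
}
{code}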

> libhdfs++: signed to unsigned conversions are breaking things and compiler 
> isn't issuing expected warnings
> --
>
> Key: HDFS-10554
> URL: https://issues.apache.org/jira/browse/HDFS-10554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>
> There are at least two places where we use -1 to indicate unset/default values 
> that end up getting cast into unsigned integers.  The compiler should be 
> smart enough to figure this out and issue a warning, but it's not; we need to 
> find out what's going on there.  We also need to fix the places where this 
> sort of thing has found its way into the code:
> In URI
> {code}
>   // -1 if the port is undefined.
>   optional get_port() const
>   { return port; }
> {code}
> In Options (gets cast to uint64_t somewhere)
> {code}
> /**
>  * Maximum number of retries for RPC operations
>  **/
> int max_rpc_retries;
> static const int kNoRetry = -1;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10567) Improve plan command help message

2016-06-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353674#comment-15353674
 ] 

Lei (Eddy) Xu commented on HDFS-10567:
--

Hi [~xiaobingo], thanks for the patch. It looks good overall.

A few small nits:
* {{Path of file to write output to}}: can it be a local file or an HDFS file? 
* {{Maximum disk bandwidth in integer}} -> "bandwidth (MB) in ..." ?
* {{...when disk data density is...}}. Is there some reference for this 
terminology? It is not clear from this help message alone.

+1 pending the fixes. Thanks!

> Improve plan command help message
> -
>
> Key: HDFS-10567
> URL: https://issues.apache.org/jira/browse/HDFS-10567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10567-HDFS-1312.000.patch
>
>
> {code}
> --bandwidth <arg>            Maximum disk bandwidth to be consumed by
>                              diskBalancer. e.g. 10
> --maxerror <arg>             Describes how many errors can be
>                              tolerated while copying between a pair
>                              of disks.
> --out <arg>                  File to write output to, if not
>                              specified defaults will be used.
> --plan <arg>                 creates a plan for datanode.
> --thresholdPercentage <arg>  Percentage skew that wetolerate before
>                              diskbalancer starts working e.g. 10
> --v                          Print out the summary of the plan on
>                              console
> {code}
> We should 
> * Put the unit into {{--bandwidth}} or its help message. Is it an integer or 
> a float / double number? It is not clear in the CLI message.
> * Give more details about {{--plan}}. It is not clear what the {{<arg>}} is 
> for.
> * {{--thresholdPercentage}} has a typo, {{wetolerate}}, in the error message. 
> Also, it needs to indicate that it is the difference in space 
> utilization between two disks / volumes. Is it an integer or a float / double 
> number?
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9326) Create a generic function to synchronize async functions and methods.

2016-06-28 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer resolved HDFS-9326.
---
   Resolution: Won't Fix
 Assignee: James Clampffer
Fix Version/s: HDFS-8707

Closing this bug without a fix.  It's not required and there doesn't seem to be 
a good solution that works on compilers more than a year or so old.
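
For the record, a hedged sketch of what such a generic helper could have looked 
like, using std::promise/std::future; this was never committed, the names are 
illustrative, and the call-site ergonomics depend on std::function or generic 
lambdas, which is part of the compiler-support problem mentioned above.

{code}
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
#include <utility>

// Wrap an async API so the promise/future pair no longer appears at the
// call site; the async function receives a callback that fulfills the promise.
template <typename Result, typename AsyncFn>
Result Synchronize(AsyncFn&& async_fn) {
  auto promise = std::make_shared<std::promise<Result>>();
  std::future<Result> future = promise->get_future();
  std::forward<AsyncFn>(async_fn)([promise](Result value) {
    promise->set_value(std::move(value));
  });
  return future.get();  // block until the callback fires
}

// Stand-in for an async API like the ones in the C / high-level C++ bindings.
void AsyncAdd(int a, int b, std::function<void(int)> cb) {
  std::thread([=] { cb(a + b); }).detach();
}

int main() {
  int sum = Synchronize<int>([](std::function<void(int)> cb) {
    AsyncAdd(20, 22, std::move(cb));
  });
  std::cout << "sum = " << sum << "\n";  // prints 42
  return 0;
}
{code}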

> Create a generic function to synchronize async functions and methods. 
> --
>
> Key: HDFS-9326
> URL: https://issues.apache.org/jira/browse/HDFS-9326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Fix For: HDFS-8707
>
>
> The majority of the functionality in libhdfs++ is asynchronous, but some 
> applications need synchronous operations.  At the time of filing, this only 
> happens in 3 places in the C API; however, that number is going to grow a lot 
> once the C and high-level C++ APIs expose all of the namenode functions.
> This synchronization is typically implemented like this:
> auto promise = std::make_shared<std::promise<T>>();
> std::future<T> future(promise->get_future());
> auto async_callback = [promise] (T val) {promise->set_value(val);};
> SomeClass::AsyncMethod(async_callback); 
> auto result = future.get();
> Ideally this could all be pushed into a templated function so that the 
> promise and future don't need to be defined at the call site.  This would 
> probably take the form of doing a std::bind to get all the arguments in place 
> at the call site and then passing that to the synchronize function.
> This appears to require some template magic that isn't always well supported; 
> see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51979.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser

2016-06-28 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10578:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
>
> The URI parser is calling free on buffers that are const qualified, and gcc 
> complains.  It had already been complaining about some other stuff that we 
> had a flag for, so I'd like to just add a "-w" flag to silence everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser

2016-06-28 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353511#comment-15353511
 ] 

James Clampffer commented on HDFS-10578:


Thanks for reviewing [~xiaowei.zhu].  Committed to HDFS-8707.

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
>
> The URI parser is calling free on buffers that are const qualified, and gcc 
> complains.  It had already been complaining about some other stuff that we 
> had a flag for, so I'd like to just add a "-w" flag to silence everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10488) WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating files/directories without specifying permissions

2016-06-28 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353453#comment-15353453
 ] 

Wellington Chevreuil commented on HDFS-10488:
-

Does anyone have any additional comments on the last patch?

> WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating 
> files/directories without specifying permissions
> --
>
> Key: HDFS-10488
> URL: https://issues.apache.org/jira/browse/HDFS-10488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Affects Versions: 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-10488.002.patch, HDFS-10488.003.patch, 
> HDFS-10488.005.patch, HDFS-10488.patch
>
>
> WebHDFS methods for creating files/directories always create them with 755 
> permissions by default, ignoring any configured 
> *fs.permissions.umask-mode* in the case of directories.
> The DFS CLI, however, applies the configured umask to the 777 base permission 
> for directories, or the 666 base permission for files.
> Example below shows the different behaviour when creating directory via CLI 
> and WebHDFS:
> {noformat}
> 1) Creating a directory under '/test/' as 'test-user'. Configured 
> fs.permissions.umask-mode is 000: 
> $ sudo -u test-user hdfs dfs -mkdir /test/test-user1 
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user1 
> # file: /test/test-user1
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::rwx 
> other::rwx 
> 4) Doing the same via WebHDFS does not get the proper ACLs: 
> $ curl -i -X PUT 
> "http://namenode-host:50070/webhdfs/v1/test/test-user2?user.name=test-user=MKDIRS;
>  
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user2 
> # file: /test/test-user2 
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::r-x 
> other::r-x
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6434) Default permission for creating file should be 644 for WebHdfs/HttpFS

2016-06-28 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353451#comment-15353451
 ] 

Wellington Chevreuil commented on HDFS-6434:


Hi, any additional thoughts on my previous comments?

> Default permission for creating file should be 644 for WebHdfs/HttpFS
> -
>
> Key: HDFS-6434
> URL: https://issues.apache.org/jira/browse/HDFS-6434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Juan Yu
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-6434.002.patch, HDFS-6434.patch
>
>
> Creating a file using WebHdfs without specifying a permission: the file is 
> created with permission 755; it should be 644.
> WebHdfs seems to use the same default permission for both files and 
> directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353446#comment-15353446
 ] 

John Zhuge commented on HDFS-6962:
--

{{FSDirMkdirOp.createAncestorDirectories}} is unrelated to the current inode 
being created, and thus should not use the current create modes.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch
>
>
> In hdfs-site.xml: 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is ok!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL, but only r-x is 
> effective because the mask is r-x (mask::r-x), even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner -- ) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10512) VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks

2016-06-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353315#comment-15353315
 ] 

Wei-Chiu Chuang commented on HDFS-10512:


Hi [~linyiqun], I much appreciate your patch. The patch itself looks good to me.

However, I have been hesitant to give my non-binding +1, because when this 
method is called, a block is corrupt. After this patch, the VolumeScanner will 
not terminate prematurely, which is good, but it still won't tell the NameNode 
to mark this replica corrupt. And that's still a really bad thing to have.

Any comments? [~xyao] or other watchers?
Do you think this patch should go in even though we do not know the root cause 
of the NPE?

This is a really bad bug that causes the pipeline to abort, because the corrupt 
replica can never be transmitted correctly to the downstream pipeline, and the 
pipeline cannot construct three good replicas.

BTW, I have found the root cause of the corrupt replica, and I'll file another 
jira today, but I still think it would be nice to know what causes the NPE.

> VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks
> --
>
> Key: HDFS-10512
> URL: https://issues.apache.org/jira/browse/HDFS-10512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10512.001.patch, HDFS-10512.002.patch
>
>
> VolumeScanner may terminate due to unexpected NullPointerException thrown in 
> {{DataNode.reportBadBlocks()}}. This is different from HDFS-8850/HDFS-9190
> I observed this bug in a production CDH 5.5.1 cluster and the same bug still 
> persist in upstream trunk.
> {noformat}
> 2016-04-07 20:30:53,830 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-1800173197-10.204.68.5-125156296:blk_1170134484_96468685 on /dfs/dn
> 2016-04-07 20:30:53,831 ERROR 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting because of exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:1018)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner$ScanResultHandler.handle(VolumeScanner.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:443)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:547)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:621)
> 2016-04-07 20:30:53,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting.
> {noformat}
> I think the NPE comes from the volume variable in the following code snippet. 
> Somehow the volume scanner knows the volume, but the datanode cannot look up 
> the volume using the block.
> {code}
> public void reportBadBlocks(ExtendedBlock block) throws IOException {
>   BPOfferService bpos = getBPOSForBlock(block);
>   FsVolumeSpi volume = getFSDataset().getVolume(block);
>   bpos.reportBadBlocks(
>       block, volume.getStorageID(), volume.getStorageType());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser

2016-06-28 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353312#comment-15353312
 ] 

Xiaowei Zhu commented on HDFS-10578:


I see. +1 on the patch.

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
>
> The URI parser is calling free on buffers that are const qualified, and gcc 
> complains.  It had already been complaining about some other stuff that we 
> had a flag for, so I'd like to just add a "-w" flag to silence everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser

2016-06-28 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353299#comment-15353299
 ] 

James Clampffer commented on HDFS-10578:


For the URI library I think we do; we don't have any real control over what's 
going on there and things that are actually errors will still fail the build.

I traced through the code that's complaining about discarding the const 
qualifier, and what it does is safe, just not great in terms of casting.  Ideally 
we would only silence the warnings about excessive inlining and const, but I 
couldn't get the flag that was supposed to handle silencing discarded const 
qualifiers working, so I went with this approach.  The other option would be 
applying a patch with proper casts (and possibly trying to push it upstream to 
the URI parsing people), but I don't have enough time or experience with the URI 
lib to do that in the short term.
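
For completeness, a hedged sketch of what the "proper casts" route could look 
like (illustrative code, not the real URI parser): the warning comes from 
freeing memory through a const-qualified pointer, and an explicit const_cast 
makes the intent visible instead of suppressing it with "-w".

{code}
#include <cstdlib>
#include <cstring>

struct UriField {
  const char* text;  // the parser exposes its buffers as const char*
};

void FreeField(UriField* f) {
  // std::free(f->text);  // C warns (discarded qualifier); C++ rejects it
  std::free(const_cast<char*>(f->text));  // explicit cast documents intent
  f->text = nullptr;
}

int main() {
  char* owned = static_cast<char*>(std::malloc(5));
  std::memcpy(owned, "hdfs", 5);
  UriField f{owned};  // const view over heap memory the parser owns
  FreeField(&f);
  return 0;
}
{code}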

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
>
> The URI parser is calling free on buffers that are const qualified, and gcc 
> complains.  It had already been complaining about some other stuff that we 
> had a flag for, so I'd like to just add a "-w" flag to silence everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353254#comment-15353254
 ] 

Hadoop QA commented on HDFS-9852:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 43 unchanged - 0 fixed = 45 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789934/HDFS-9852.002.patch |
| JIRA Issue | HDFS-9852 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d5596cd02da1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be38e53 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15937/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15937/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15937/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hdfs dfs -setfacl error message is misleading
> -
>
> Key: HDFS-9852
> URL: https://issues.apache.org/jira/browse/HDFS-9852
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>   

[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353226#comment-15353226
 ] 

Hadoop QA commented on HDFS-10441:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
50s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
21s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814130/HDFS-10441.HDFS-8707.008.patch
 |
| JIRA Issue | HDFS-10441 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux bf89e2cadf9c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / a903f78 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15935/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15935/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> 

[jira] [Commented] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser

2016-06-28 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353197#comment-15353197
 ] 

Xiaowei Zhu commented on HDFS-10578:


Do we actually want to silence everything?

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
>
> The URI parser is calling free on buffers that are const qualified, and gcc 
> complains.  It had already been complaining about some other stuff that we 
> had a flag for, so I'd like to just add a "-w" flag to silence everything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10336) TestBalancer failing intermittently because of not resetting UserGroupInformation completely

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353183#comment-15353183
 ] 

Hadoop QA commented on HDFS-10336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814117/HDFS-10336.002.patch |
| JIRA Issue | HDFS-10336 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 394be24a9414 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15933/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15933/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15933/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestBalancer failing intermittently because of not resetting 
> UserGroupInformation 

[jira] [Commented] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353152#comment-15353152
 ] 

Weiwei Yang commented on HDFS-10583:


Thanks [~rushabh.shah], I just revised the title; I hope it is clearer now. Feel 
free to modify it.

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore a component's configuration properties, such 
> as the namenode's or a datanode's, it is helpful to provide a UI page to read 
> them. This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10583) Add links to component's configuration UI page under Utilities dropdown

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10583:
---
Summary: Add links to component's configuration UI page under Utilities 
dropdown  (was: Add Utilities/conf links to HDFS UI)

> Add links to component's configuration UI page under Utilities dropdown
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore a component's configuration properties, such 
> as the namenode's or a datanode's, it is helpful to provide a UI page to read 
> them. This is extremely useful when nodes have different configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353144#comment-15353144
 ] 

Hadoop QA commented on HDFS-9852:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 43 unchanged - 0 fixed = 45 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789934/HDFS-9852.002.patch |
| JIRA Issue | HDFS-9852 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a81c73cdf68 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be38e53 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15934/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15934/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15934/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hdfs dfs -setfacl error message is misleading
> -
>
> Key: HDFS-9852
> URL: https://issues.apache.org/jira/browse/HDFS-9852
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>   

[jira] [Commented] (HDFS-10486) "Cannot start secure datanode with unprivileged HTTP ports" should give config param

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353126#comment-15353126
 ] 

Hadoop QA commented on HDFS-10486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808540/HDFS-10486.001.patch |
| JIRA Issue | HDFS-10486 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e9538e5c55d2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 23c3ff8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15932/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15932/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15932/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "Cannot start secure datanode with 

[jira] [Commented] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353094#comment-15353094
 ] 

Rushabh S Shah commented on HDFS-10583:
---

bq. Does that make sense?
Makes sense; I hadn't understood the whole jira before.
Thanks for explaining.

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of the 
> namenode and datanode, it is helpful to provide a UI page to read them. This is 
> extremely useful when nodes have different configurations.






[jira] [Comment Edited] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353074#comment-15353074
 ] 

Weiwei Yang edited comment on HDFS-10583 at 6/28/16 2:32 PM:
-

Hello [~rushabh.shah]

Yes, they already exist; we want to add a link to them from the UI, so the 
proposal is to put it under {{Utilities -> Configuration}}. This jira was 
created based on [~vinayrpet]'s comment 
[here|https://issues.apache.org/jira/browse/HDFS-10440?focusedCommentId=15352354=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15352354].
 You can also take a look at this [screen 
shot|https://issues.apache.org/jira/secure/attachment/12808394/datanode_utilities.002.jpg].
 Does that make sense?


was (Author: cheersyang):
Hello [~rushabh.shah]

Yes, they already exist; we want to add a link to them from the UI, so the 
proposal is to put it under {{ Utilities -> Configuration }}. This jira was 
created based on [~vinayrpet]'s comment 
[here|https://issues.apache.org/jira/browse/HDFS-10440?focusedCommentId=15352354=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15352354].
 You can also take a look at this [screen 
shot|https://issues.apache.org/jira/secure/attachment/12808394/datanode_utilities.002.jpg].
 Does that make sense?

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of the 
> namenode and datanode, it is helpful to provide a UI page to read them. This is 
> extremely useful when nodes have different configurations.






[jira] [Commented] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353074#comment-15353074
 ] 

Weiwei Yang commented on HDFS-10583:


Hello [~rushabh.shah]

Yes, they already exist; we want to add a link to them from the UI, so the 
proposal is to put it under {{ Utilities -> Configuration }}. This jira was 
created based on [~vinayrpet]'s comment 
[here|https://issues.apache.org/jira/browse/HDFS-10440?focusedCommentId=15352354=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15352354].
 You can also take a look at this [screen 
shot|https://issues.apache.org/jira/secure/attachment/12808394/datanode_utilities.002.jpg].
 Does that make sense?

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of the 
> namenode and datanode, it is helpful to provide a UI page to read them. This is 
> extremely useful when nodes have different configurations.






[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-06-28 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Attachment: HDFS-10441.HDFS-8707.008.patch

Uploaded patch 008 to try to get CI to pick it up; it is the same as 007.

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.






[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353056#comment-15353056
 ] 

Allen Wittenauer commented on HDFS-10210:
-

Changing summary since startKdc is still (optionally) used by hadoop-common. :)

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Updated] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10210:

Summary: Remove the defunct startKdc profile from hdfs  (was: Remove the 
defunct startKdc profile)

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353051#comment-15353051
 ] 

Weiwei Yang commented on HDFS-10440:


Thanks a lot [~vinayrpet], [~kihwal]

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> HDFS-10440.009.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information except the node 
> name and port. We propose adding more information, similar to the namenode UI, 
> including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10436) dfs.block.access.token.enable should default on when security is !simple

2016-06-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353001#comment-15353001
 ] 

Yiqun Lin commented on HDFS-10436:
--

Hi [~aw], the comments I gave before seem incorrect; I am sorry for that. 
Defaulting dfs.block.access.token.enable to on will make the method 
{{createBlockTokenSecretManager}} return a new {{BlockTokenSecretManager}}, 
which means block tokens become enabled in {{BlockManager}}. That changes 
the existing logic.
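
A minimal sketch of the decision being described, for readers following along (the config key is real, but the class and method below are simplified stand-ins, not the actual {{BlockManager}} source):
{code}
import org.apache.hadoop.conf.Configuration;

public class BlockTokenEnableSketch {
  static final String KEY = "dfs.block.access.token.enable";

  // Stand-in for BlockManager#createBlockTokenSecretManager: it returns null
  // when block tokens are disabled, so flipping the default changes whether
  // BlockManager ends up holding a secret manager at all.
  static Object createSecretManagerSketch(Configuration conf, boolean defaultOn) {
    if (!conf.getBoolean(KEY, defaultOn)) {
      return null;               // block tokens stay off
    }
    return new Object();         // stand-in for new BlockTokenSecretManager(...)
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);  // key not set anywhere
    System.out.println(createSecretManagerSketch(conf, false)); // null: disabled
    System.out.println(createSecretManagerSketch(conf, true));  // non-null: enabled
  }
}
{code}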

> dfs.block.access.token.enable should default on when security is !simple
> 
>
> Key: HDFS-10436
> URL: https://issues.apache.org/jira/browse/HDFS-10436
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Yiqun Lin
> Attachments: HDFS-10436.001.patch
>
>
> Unless there is a valid configuration where dfs.block.access.token.enable is 
> off and security is on, then rather than shutting down we should just enable 
> the block access tokens.






[jira] [Comment Edited] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352980#comment-15352980
 ] 

Rushabh S Shah edited comment on HDFS-10583 at 6/28/16 1:28 PM:


I think we are already exposing these configuration properties.
For the namenode, they are located at /conf.
For the datanode, they are located at /conf.


was (Author: shahrs87):
I think we are already exposing this configuration.
For namenode, it is located at /conf
For datanode, it is located at /conf

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of the 
> namenode and datanode, it is helpful to provide a UI page to read them. This is 
> extremely useful when nodes have different configurations.






[jira] [Commented] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352980#comment-15352980
 ] 

Rushabh S Shah commented on HDFS-10583:
---

I think we are already exposing this configuration.
For the namenode, it is located at /conf.
For the datanode, it is located at /conf.
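
For anyone who wants to verify quickly, the servlet can be read with any HTTP client; here is a minimal Java sketch (the host and port are placeholders, substitute the HTTP address of your own namenode or datanode):
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ConfEndpointExample {
  public static void main(String[] args) throws Exception {
    // Placeholder address; use your NN/DN HTTP host:port here.
    URL url = new URL("http://namenode.example.com:50070/conf");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // the effective configuration, dumped as XML
      }
    }
  }
}
{code}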

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of the 
> namenode and datanode, it is helpful to provide a UI page to read them. This is 
> extremely useful when nodes have different configurations.






[jira] [Comment Edited] (HDFS-10336) TestBalancer failing intermittently because of not resetting UserGroupInformation completely

2016-06-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352976#comment-15352976
 ] 

Yiqun Lin edited comment on HDFS-10336 at 6/28/16 1:25 PM:
---

Thanks [~rakeshr] for the review. 
{quote}
Increasing the timeout is one approach, but I am interested to know the reason 
behind the 300000millis timeout. Did you see any specific case exceeding the 
current value?
{quote}
I tested many times locally; it looks good and runs quickly, so I'm not sure 
in which case the 300s timeout would be exceeded now. Posted the patch 
addressing your comments.


was (Author: linyiqun):
Thanks [~rakeshr] for review. 
{quote}
Increasing timeout is one approach, but am interested to know the reason behind 
300000millis timeout. Did you see any specific case for exceeding the current 
value?
{quote}
I tested many times in my local, it seems good and runs quickly. I'm not so 
sure for the case that exceeding the 300s now. Post the patch for your the 
comments.

> TestBalancer failing intermittently because of not resetting 
> UserGroupInformation completely
> ---
>
> Key: HDFS-10336
> URL: https://issues.apache.org/jira/browse/HDFS-10336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10336.001.patch, HDFS-10336.002.patch
>
>
> The unit test {{TestBalancer}} fails intermittently. 
> I looked into the reason and found two main causes.
> * 1st. The test {{TestBalancer#testBalancerWithKeytabs}} times out.
> {code}
> org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithKeytabs(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
> Time elapsed: 300.41 sec  <<< ERROR!
> java.lang.Exception: test timed out after 300000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.waitForMoveCompletion(Dispatcher.java:1122)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchBlockMoves(Dispatcher.java:1096)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchAndCheckContinue(Dispatcher.java:1060)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.runOneIteration(Balancer.java:635)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:689)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:125)
> {code}
> * 2nd. The test {{TestBalancer#testBalancerWithKeytabs}} sometimes does not 
> completely reset the {{UGI}} in its finally block, and this caused other 
> unit tests to throw {{IOException}}, like this:
> {code}
> testBalancerWithNonZeroThreadsForMove(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 0 sec  <<< ERROR!
> java.io.IOException: Running in secure mode, but config doesn't have a keytab
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:300)
> {code}
> More than one test can be affected by this. We should add the following 
> line to the {{UGI}} reset logic to avoid the potential 
> exception:
> {code}
> UserGroupInformation.reset();
> {code}






[jira] [Updated] (HDFS-10336) TestBalancer failing intermittently because of not resetting UserGroupInformation completely

2016-06-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10336:
-
Attachment: HDFS-10336.002.patch

Thanks [~rakeshr] for the review. 
{quote}
Increasing the timeout is one approach, but I am interested to know the reason 
behind the 300000millis timeout. Did you see any specific case exceeding the 
current value?
{quote}
I tested many times locally; it looks good and runs quickly, so I'm not sure 
in which case the 300s timeout would be exceeded now. Posted the patch 
addressing your comments.

> TestBalancer failing intermittently because of not resetting 
> UserGroupInformation completely
> ---
>
> Key: HDFS-10336
> URL: https://issues.apache.org/jira/browse/HDFS-10336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10336.001.patch, HDFS-10336.002.patch
>
>
> The unit test {{TestBalancer}} fails intermittently. 
> I looked into the reason and found two main causes.
> * 1st. The test {{TestBalancer#testBalancerWithKeytabs}} times out.
> {code}
> org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithKeytabs(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
> Time elapsed: 300.41 sec  <<< ERROR!
> java.lang.Exception: test timed out after 300000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.waitForMoveCompletion(Dispatcher.java:1122)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchBlockMoves(Dispatcher.java:1096)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchAndCheckContinue(Dispatcher.java:1060)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.runOneIteration(Balancer.java:635)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:689)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:125)
> {code}
> * 2nd. The test {{TestBalancer#testBalancerWithKeytabs}} sometimes does not 
> completely reset the {{UGI}} in its finally block, and this caused other 
> unit tests to throw {{IOException}}, like this:
> {code}
> testBalancerWithNonZeroThreadsForMove(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 0 sec  <<< ERROR!
> java.io.IOException: Running in secure mode, but config doesn't have a keytab
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:300)
> {code}
> More than one test can be affected by this. We should add the following 
> line to the {{UGI}} reset logic to avoid the potential 
> exception:
> {code}
> UserGroupInformation.reset();
> {code}






[jira] [Commented] (HDFS-10436) dfs.block.access.token.enable should default on when security is !simple

2016-06-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352958#comment-15352958
 ] 

Allen Wittenauer commented on HDFS-10436:
-

Oh, so if block token access is enabled on an insecure cluster, it doesn't 
actually do anything?  If so, then let's add that to the description in 
hdfs-site.xml so that admins understand why this defaults to true.

> dfs.block.access.token.enable should default on when security is !simple
> 
>
> Key: HDFS-10436
> URL: https://issues.apache.org/jira/browse/HDFS-10436
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Yiqun Lin
> Attachments: HDFS-10436.001.patch
>
>
> Unless there is a valid configuration where dfs.block.access.token.enable is 
> off and security is on, then rather than shutting down we should just enable 
> the block access tokens.






[jira] [Updated] (HDFS-10436) dfs.block.access.token.enable should default on when security is !simple

2016-06-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10436:

Hadoop Flags: Incompatible change

> dfs.block.access.token.enable should default on when security is !simple
> 
>
> Key: HDFS-10436
> URL: https://issues.apache.org/jira/browse/HDFS-10436
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Yiqun Lin
> Attachments: HDFS-10436.001.patch
>
>
> Unless there is a valid configuration where dfs.block.access.token.enable is 
> off and security is on, then rather than shutting down we should just enable 
> the block access tokens.






[jira] [Commented] (HDFS-10570) Netty-all jar should be first in class path while running tests in eclipse

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352947#comment-15352947
 ] 

Hadoop QA commented on HDFS-10570:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814091/HDFS-10570-01.patch |
| JIRA Issue | HDFS-10570 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 9982d58dfb9a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a0082c |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15931/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15931/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15931/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Netty-all jar should be first in class path while running tests in eclipse
> --
>
> Key: HDFS-10570
> URL: https://issues.apache.org/jira/browse/HDFS-10570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HDFS-10570-01.patch
>
>
> While debugging tests in eclipse, the DN http url cannot be accessed. 
> Also WebHdfs tests cannot run in eclipse due to classes loading from the old 
> version of netty jars instead of the netty-all jar.

[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352825#comment-15352825
 ] 

Hudson commented on HDFS-10440:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10025 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10025/])
HDFS-10440. Improve DataNode web UI (Contributed by Weiwei Yang) (vinayakumarb: 
rev 2a0082c51da7cbe2770eddb5f72cd7f8d72fa5f6)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/dn.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java
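
For context on how a page like this gets its data: the datanode HTML/JS reads values over JMX from a bean such as the {{DataNodeMXBean}} touched above. A hypothetical sketch of the shape of such getters (the interface and method names below are illustrative, not the committed API):
{code}
public interface DataNodeInfoSketchMXBean {
  // Static info (version, cluster ID), exposed as plain strings so the
  // web UI can fetch them via /jmx and render them directly.
  String getVersion();

  // Per-block-pool info (BP ID, namenode address, actor state), typically
  // serialized as a JSON string for the UI to parse.
  String getBlockPoolInfo();
}
{code}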


> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> HDFS-10440.009.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information except the node 
> name and port. We propose adding more information, similar to the namenode UI, 
> including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10570) Netty-all jar should be first in class path while running tests in eclipse

2016-06-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10570:
-
Affects Version/s: 2.8.0
   Status: Patch Available  (was: Open)

> Netty-all jar should be first in class path while running tests in eclipse
> --
>
> Key: HDFS-10570
> URL: https://issues.apache.org/jira/browse/HDFS-10570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HDFS-10570-01.patch
>
>
> While debugging tests in eclipse, the DN http url cannot be accessed. 
> Also WebHdfs tests cannot run in eclipse due to classes loading from the old 
> version of netty jars instead of the netty-all jar.
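
A quick, illustrative way to confirm which jar a Netty class is actually loaded from when chasing this kind of classpath-ordering problem (the class name is only an example):
{code}
public class WhichJar {
  public static void main(String[] args) throws Exception {
    // Prints the jar URL this class was loaded from; if it points at an old
    // per-module netty jar rather than netty-all, the ordering is wrong.
    Class<?> c = Class.forName("io.netty.channel.Channel");
    System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}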






[jira] [Updated] (HDFS-10570) Netty-all jar should be first in class path while running tests in eclipse

2016-06-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10570:
-
Attachment: HDFS-10570-01.patch

Attached the patch to change the order.

> Netty-all jar should be first in class path while running tests in eclipse
> --
>
> Key: HDFS-10570
> URL: https://issues.apache.org/jira/browse/HDFS-10570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HDFS-10570-01.patch
>
>
> While debugging tests in eclipse, the DN http url cannot be accessed. 
> Also WebHdfs tests cannot run in eclipse due to classes loading from the old 
> version of netty jars instead of the netty-all jar.






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10440:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
 Release Note: DataNode Web UI has been improved with new HTML5 page, 
showing useful information.
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8.

Thanks for the contribution, [~cheersyang]. 
Thanks for the reviews, [~kihwal].

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> HDFS-10440.009.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information except the node 
> name and port. We propose adding more information, similar to the namenode UI, 
> including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352809#comment-15352809
 ] 

Vinayakumar B commented on HDFS-10440:
--

Latest patch looks good to me 
+1.
Committing based on [~kihwal]'s earlier +1.

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> HDFS-10440.009.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information except the node 
> name and port. We propose adding more information, similar to the namenode UI, 
> including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352807#comment-15352807
 ] 

Hadoop QA commented on HDFS-10580:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814077/HDFS-10580.002.patch |
| JIRA Issue | HDFS-10580 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bed7216f8bb8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fd37ee |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15930/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15930/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15930/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
> --
>

[jira] [Commented] (HDFS-10336) TestBalancer failing intermittently because of not resetting UserGroupInformation completely

2016-06-28 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352687#comment-15352687
 ] 

Rakesh R commented on HDFS-10336:
-

Thanks [~linyiqun] for the contribution. Could you please rebase the patch, as 
HADOOP-13251 has modified the {{UserGroupInformation#reset}} code?

bq. 1st. The test TestBalancer#testBalancerWithKeytabs times out.
Increasing the timeout is one approach, but I am interested to know the reason 
behind the 300000millis timeout. Did you see any specific case exceeding the 
current value?
bq. 2nd. The test TestBalancer#testBalancerWithKeytabs sometimes does not 
completely reset the UGI in its finally block.
+1 for {{UserGroupInformation.reset();}}
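
A short sketch of the cleanup pattern being +1'd here (simplified; the real change belongs in TestBalancer#testBalancerWithKeytabs, and this standalone shape is only illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiResetSketch {
  void runSecureScenario() throws Exception {
    try {
      // ... configure Kerberos, log in from a keytab, run the balancer ...
    } finally {
      // Clear UGI's static state first, then reinstall a plain (simple-auth)
      // configuration so later tests don't inherit stale secure-mode settings.
      UserGroupInformation.reset();
      UserGroupInformation.setConfiguration(new Configuration());
    }
  }
}
{code}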


> TestBalancer failing intermittently because of not resetting 
> UserGroupInformation completely
> ---
>
> Key: HDFS-10336
> URL: https://issues.apache.org/jira/browse/HDFS-10336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10336.001.patch
>
>
> The unit test {{TestBalancer}} fails intermittently. 
> I looked into the reason and found two main causes.
> * 1st. The test {{TestBalancer#testBalancerWithKeytabs}} times out.
> {code}
> org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithKeytabs(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
> Time elapsed: 300.41 sec  <<< ERROR!
> java.lang.Exception: test timed out after 300000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.waitForMoveCompletion(Dispatcher.java:1122)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchBlockMoves(Dispatcher.java:1096)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchAndCheckContinue(Dispatcher.java:1060)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.runOneIteration(Balancer.java:635)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:689)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:125)
> {code}
> * 2nd. The test {{TestBalancer#testBalancerWithKeytabs}} sometimes does not 
> completely reset the {{UGI}} in its finally block, and this caused other 
> unit tests to throw {{IOException}}, like this:
> {code}
> testBalancerWithNonZeroThreadsForMove(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 0 sec  <<< ERROR!
> java.io.IOException: Running in secure mode, but config doesn't have a keytab
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:300)
> {code}
> More than one test can be affected by this. We should add the following 
> line to the {{UGI}} reset logic to avoid the potential 
> exception:
> {code}
> UserGroupInformation.reset();
> {code}






[jira] [Updated] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info

2016-06-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10580:
-
Attachment: HDFS-10580.002.patch

> DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
> --
>
> Key: HDFS-10580
> URL: https://issues.apache.org/jira/browse/HDFS-10580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10580.001.patch, HDFS-10580.002.patch
>
>
> There are two unused methods, {{skipVolume}} and {{printQueue}}, in class 
> {{GreedyPlanner}}. They were added in HDFS-9469 but are never called. Both 
> print detailed debug info, so we can make use of them.






[jira] [Commented] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info

2016-06-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352680#comment-15352680
 ] 

Yiqun Lin commented on HDFS-10580:
--

Thanks [~anu] for the review. I ran some tests locally; the output seems a 
little noisy to me, but I think this information will be useful when we want 
to debug this class. I also found another problem: the output lines were not 
ordered. Sometimes there are two First Volume lines in a row, and only then 
the Last Volume info, like this:
{code}
2016-06-28 17:27:30,739 [pool-1-thread-1] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(263)) - First Volume : /tmp/disk/hIDn1xAOE0, 
DataDensity : 0.439100
2016-06-28 17:27:30,739 [pool-1-thread-3] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(263)) - First Volume : /tmp/disk/xH5Gyutu4r, 
DataDensity : 0.017900
2016-06-28 17:27:30,740 [pool-1-thread-1] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(268)) - Last Volume : /tmp/disk/lbAmdQf3Zl, 
DataDensity : -0.170200

2016-06-28 17:27:30,740 [pool-1-thread-3] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(268)) - Last Volume : /tmp/disk/ZFGQuCn4Y2, 
DataDensity : -0.128100

2016-06-28 17:27:30,740 [pool-1-thread-1] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(263)) - First Volume : /tmp/disk/noTvhjLIXR, 
DataDensity : 0.035100
2016-06-28 17:27:30,740 [pool-1-thread-3] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(263)) - First Volume : /tmp/disk/DD1sDuwvA4, 
DataDensity : 0.000100
{code}
We can merge these two lines into one, which also reduces the amount of output. 
The new output from my test run:
{code}
2016-06-28 17:22:56,516 [pool-1-thread-3] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(266)) - First Volume : /tmp/disk/xH5Gyutu4r, 
DataDensity : 0.104200, Last Volume : /tmp/disk/DD1sDuwvA4, DataDensity : 
-0.532700
2016-06-28 17:22:56,516 [pool-1-thread-1] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(266)) - First Volume : /tmp/disk/noTvhjLIXR, 
DataDensity : 0.035100, Last Volume : /tmp/disk/lbAmdQf3Zl, DataDensity : 
-0.045200
2016-06-28 17:22:56,516 [pool-1-thread-3] INFO  planner.GreedyPlanner 
(GreedyPlanner.java:printQueue(266)) - First Volume : /tmp/disk/xH5Gyutu4r, 
DataDensity : 0.017900, Last Volume : /tmp/disk/ZFGQuCn4Y2, DataDensity : 
-0.128100
{code}
Posted a new patch fixing this.
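
A hypothetical sketch of the merged form (the real method is GreedyPlanner#printQueue; the names and types below are simplified):
{code}
import java.util.PriorityQueue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PrintQueueSketch {
  private static final Logger LOG = LoggerFactory.getLogger(PrintQueueSketch.class);

  static final class Volume {
    final String path;
    final double dataDensity;
    Volume(String path, double dataDensity) {
      this.path = path;
      this.dataDensity = dataDensity;
    }
  }

  // Logging first and last in a single call keeps one queue's snapshot on one
  // log line, so interleaved planner threads cannot split it across lines.
  static void printQueue(PriorityQueue<Volume> queue, Volume last) {
    Volume first = queue.peek();
    if (first == null || last == null) {
      return;
    }
    LOG.info("First Volume : {}, DataDensity : {}, Last Volume : {}, DataDensity : {}",
        first.path, first.dataDensity, last.path, last.dataDensity);
  }
}
{code}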

> DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
> --
>
> Key: HDFS-10580
> URL: https://issues.apache.org/jira/browse/HDFS-10580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10580.001.patch
>
>
> There are two unused methods, {{skipVolume}} and {{printQueue}}, in class 
> {{GreedyPlanner}}. They were added in HDFS-9469 but are never called. Both 
> print detailed debug info, so we can make use of them.






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352624#comment-15352624
 ] 

Hadoop QA commented on HDFS-10440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 218 unchanged - 1 fixed = 218 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814056/HDFS-10440.009.patch |
| JIRA Issue | HDFS-10440 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7df31281f1fd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fd37ee |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15929/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15929/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15929/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> 

[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: HDFS-10440.009.patch

Fixed checkstyle warnings in the v9 patch.

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> HDFS-10440.009.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information beyond the 
> node name and port. Propose to add more information, similar to the 
> namenode UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)
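
To illustrate the general shape of such an improvement (this is not the actual patch), datanode pages typically render JSON fetched from the node's JMX beans. A minimal sketch of a bean carrying the proposed fields might look like this; every method name and JSON shape here is an assumption:
{code}
// Hypothetical sketch only: method names and JSON shapes are assumptions,
// not the API introduced by the actual HDFS-10440 patch.
public class DataNodeWebInfoSketch {

  // Static info (version, block pool and cluster ID)
  public String getVersion() { return "2.8.0-SNAPSHOT"; }
  public String getClusterId() { return "CID-example-cluster-id"; }

  // Block pool info: BP ID, namenode address and actor state,
  // serialized as JSON for the UI templates to render.
  public String getBlockPoolInfo() {
    return "[{\"bpid\":\"BP-1\",\"namenode\":\"nn1:8020\","
        + "\"actorState\":\"RUNNING\"}]";
  }

  // Storage info: per-volume directory plus used/reserved/left bytes.
  public String getVolumeInfo() {
    return "[{\"dir\":\"/data/1\",\"usedBytes\":1073741824,"
        + "\"reservedBytes\":0,\"leftBytes\":4294967296}]";
  }

  public static void main(String[] args) {
    DataNodeWebInfoSketch bean = new DataNodeWebInfoSketch();
    // A web page would fetch values like these (e.g. over the /jmx
    // endpoint) and render them into the datanode page.
    System.out.println(bean.getBlockPoolInfo());
    System.out.println(bean.getVolumeInfo());
  }
}
{code}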



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352486#comment-15352486
 ] 

Hadoop QA commented on HDFS-10440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 219 unchanged - 0 fixed = 221 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814019/HDFS-10440.008.patch |
| JIRA Issue | HDFS-10440 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b61daa4901b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fd37ee |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15927/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15927/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15927/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15927/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: 
