[jira] [Commented] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383681#comment-15383681
 ] 

Vinayakumar B commented on HDFS-10652:
--

Thanks [~yzhangal]. 
The test given earlier was not of commit quality. 
Maybe we can refine the test so that it can be committed, 
e.g. by removing {{System.out.println("VINAY : read : "+count);}} and making 
some other improvements if required. :)
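
A minimal sketch of the kind of cleanup suggested above (not part of the
attached patch; the class, method, and logger names are made up for
illustration): either drop the ad-hoc println entirely, or route it through a
logger at debug level so it stays silent in normal test runs.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ReadProgressLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReadProgressLogging.class);

  // Instead of: System.out.println("VINAY : read : " + count);
  void reportBytesRead(int count) {
    LOG.debug("read: {} bytes so far", count);
  }
}
{code}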

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
> Attachments: HDFS-10652.001.patch
>
>







[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Status: Patch Available  (was: Open)

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
> Attachments: HDFS-10652.001.patch
>
>







[jira] [Commented] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383679#comment-15383679
 ] 

Yongjun Zhang commented on HDFS-10652:
--

Thanks a lot [~vinayrpet] for the help creating the test case for the issue 
reported in HDFS-10587. Attaching it as rev 001 here, since it reproduces the 
scenario we found in HDFS-10587, which was fixed by HDFS-4660. That is, after 
reverting HDFS-4660, HDFS-9220, and HDFS-8722, the test fails as expected, so 
it makes a good unit test for HDFS-4660.





> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
> Attachments: HDFS-10652.001.patch
>
>







[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: HDFS-10652.001.patch

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
> Attachments: HDFS-10652.001.patch
>
>







[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: (was: HDFS-10587-test.patch)

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>







[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: HDFS-10587-test.patch

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
> Attachments: HDFS-10587-test.patch
>
>







[jira] [Created] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-18 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10652:


 Summary: Add a unit test for HDFS-4660
 Key: HDFS-10652
 URL: https://issues.apache.org/jira/browse/HDFS-10652
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Reporter: Yongjun Zhang









[jira] [Commented] (HDFS-10603) Flaky test org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint

2016-07-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383654#comment-15383654
 ] 

Yongjun Zhang commented on HDFS-10603:
--

Thanks for working on this, [~linyiqun], nice catch! Thanks [~ajisakaa] for the 
review. My +1 too. 


> Flaky test 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint
> ---
>
> Key: HDFS-10603
> URL: https://issues.apache.org/jira/browse/HDFS-10603
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Attachments: HDFS-10603.001.patch, HDFS-10603.002.patch
>
>
> Test 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint
> may fail intermittently as
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 63.386 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
> testWithCheckpoint(org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot)
>   Time elapsed: 15.092 sec  <<< ERROR!
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1363)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2041)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint(TestOpenFilesWithSnapshot.java:94)
> Results :
> Tests in error: 
>   TestOpenFilesWithSnapshot.testWithCheckpoint:94 » IO Timed out waiting for 
> Min...
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0
> {code}






[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Attachment: (was: HDFS-10633.003.patch)

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.
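
As a hedged illustration of how such a per-volume-set threshold is typically
consumed (this is not the actual DiskBalancer planner code; the 10 percent
default and the class name below are assumptions for this sketch):

{code}
import org.apache.hadoop.conf.Configuration;

public class PlanThresholdSketch {
  // Key name from this JIRA; the default value is assumed for illustration.
  static final String PLAN_THRESHOLD_KEY =
      "dfs.disk.balancer.plan.threshold.percent";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    double thresholdPercent = conf.getDouble(PLAN_THRESHOLD_KEY, 10.0);

    // Example: a computed data-density spread for one volume set.
    double volumeDataDensity = 4.5;

    // Only plan any moves when the spread exceeds the configured threshold.
    if (volumeDataDensity > thresholdPercent) {
      System.out.println("Volume set needs a balancing plan");
    } else {
      System.out.println("Within threshold; no balancing needed");
    }
  }
}
{code}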






[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Attachment: HDFS-10633.003.patch

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.






[jira] [Comment Edited] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383530#comment-15383530
 ] 

Yiqun Lin edited comment on HDFS-10633 at 7/19/16 6:07 AM:
---

Thanks [~ajisakaa] and [~eddyxu] for the review. Posted a new patch to rephrase 
this sentence and also update it in hdfs-default.xml.


was (Author: linyiqun):
Thanks [~ajisakaa] and [~eddyxu] for the review. Posted a new patch to rephrase 
this sentence.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.






[jira] [Commented] (HDFS-10169) TestEditLog.testBatchedSyncWithClosedLogs with useAsyncEditLog sometimes fails

2016-07-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383590#comment-15383590
 ] 

Rakesh R commented on HDFS-10169:
-

[~cnauroth], I think the reason for this test case failure is different from 
HDFS-10183. Please let me know your thoughts on fixing this.

> TestEditLog.testBatchedSyncWithClosedLogs with useAsyncEditLog sometimes fails
> --
>
> Key: HDFS-10169
> URL: https://issues.apache.org/jira/browse/HDFS-10169
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Rakesh R
> Attachments: HDFS-10169-00.patch, HDFS-10169-01.patch
>
>
> This failure has been seen in multiple precommit builds recently.
> {noformat}
> testBatchedSyncWithClosedLogs[1](org.apache.hadoop.hdfs.server.namenode.TestEditLog)
>   Time elapsed: 0.377 sec  <<< FAILURE!
> java.lang.AssertionError: logging edit without syncing should do not affect 
> txid expected:<1> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)
> {noformat}






[jira] [Commented] (HDFS-8065) Erasure coding: Support truncate at striped group boundary

2016-07-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383586#comment-15383586
 ] 

Rakesh R commented on HDFS-8065:


Agreed, I will come back to this jira once the HDFS-7622 design is concluded. 
Anyway, this is postponed to {{3.0.0-alpha2}}.

> Erasure coding: Support truncate at striped group boundary
> --
>
> Key: HDFS-8065
> URL: https://issues.apache.org/jira/browse/HDFS-8065
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Rakesh R
> Attachments: HDFS-8065-00.patch, HDFS-8065-01.patch
>
>
> We can support truncate at the striped group boundary first.
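
A hedged sketch of what such a boundary check could look like as a first step
(purely illustrative, not taken from the attached patches; the class name,
method name, and parameters are assumptions):

{code}
class StripedTruncateBoundarySketch {
  // Illustrative only: a striped block group logically spans roughly
  // dataBlocks * blockSize bytes of file data, so a truncate target sits on
  // the group boundary when it is an exact multiple of that size.
  static boolean onStripedGroupBoundary(long newLength, int dataBlocks,
      long blockSize) {
    long groupSize = dataBlocks * blockSize;
    return groupSize > 0 && newLength % groupSize == 0;
  }
}
{code}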






[jira] [Commented] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383567#comment-15383567
 ] 

Hadoop QA commented on HDFS-10425:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 3 unchanged - 59 fixed = 5 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m  
7s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817690/HDFS-10425.02.patch |
| JIRA Issue | HDFS-10425 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f12d8a90ad5b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16086/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16086/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16086/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachm

[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383535#comment-15383535
 ] 

Hadoop QA commented on HDFS-10633:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818735/HDFS-10633.003.patch |
| JIRA Issue | HDFS-10633 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e428ab9ccbd7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16088/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.






[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383530#comment-15383530
 ] 

Yiqun Lin commented on HDFS-10633:
--

Thanks [~ajisakaa] and [~eddyxu] for the review. Posted a new patch to rephrase 
this sentence.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.






[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Attachment: HDFS-10633.003.patch

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch, 
> HDFS-10633.003.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the DiskBalancer. This 
> setting controls whether any balancing needs to be done on the volume set, 
> but it is not yet documented in {{HDFSDiskbalancer.md}}.






[jira] [Commented] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383519#comment-15383519
 ] 

Hadoop QA commented on HDFS-10647:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818731/HDFS-10647.001.patch |
| JIRA Issue | HDFS-10647 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux e8c178ede922 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16087/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
> Attachments: HDFS-10647.001.patch
>
>
> We have an HDFS disk balancer document, but it's not linked from the top page.






[jira] [Commented] (HDFS-10626) VolumeScanner prints incorrect IOException in reportBadBlocks operation

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383512#comment-15383512
 ] 

Hadoop QA commented on HDFS-10626:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestEditLog |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818717/HDFS-10626.004.patch |
| JIRA Issue | HDFS-10626 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09b3d0d1cbd4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16085/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16085/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16085/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeScanner prints incorrect IOException in reportBadBlocks operation
> -

[jira] [Updated] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10647:
-
Attachment: HDFS-10647.001.patch

> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
> Attachments: HDFS-10647.001.patch
>
>
> We have an HDFS disk balancer document, but it's not linked from the top page.






[jira] [Updated] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10647:
-
Status: Patch Available  (was: Open)

> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
>
> We have an HDFS disk balancer document, but it's not linked from the top page.






[jira] [Commented] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383501#comment-15383501
 ] 

Yiqun Lin commented on HDFS-10647:
--

Thanks for reporting this, [~ajisakaa]. Attaching a patch for it.

> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
>
> We have an HDFS disk balancer document, but it's not linked from the top page.






[jira] [Assigned] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-10647:


Assignee: Yiqun Lin

> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: newbie
>
> We have an HDFS disk balancer document, but it's not linked from the top page.






[jira] [Commented] (HDFS-10603) Flaky test org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383488#comment-15383488
 ] 

Akira Ajisaka commented on HDFS-10603:
--

Nice catch [~linyiqun], +1 for the latest patch. I'll commit this tomorrow if 
there are no objections.

> Flaky test 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint
> ---
>
> Key: HDFS-10603
> URL: https://issues.apache.org/jira/browse/HDFS-10603
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Attachments: HDFS-10603.001.patch, HDFS-10603.002.patch
>
>
> Test 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint
> may fail intermittently as
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 63.386 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
> testWithCheckpoint(org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot)
>   Time elapsed: 15.092 sec  <<< ERROR!
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1363)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2041)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testWithCheckpoint(TestOpenFilesWithSnapshot.java:94)
> Results :
> Tests in error: 
>   TestOpenFilesWithSnapshot.testWithCheckpoint:94 » IO Timed out waiting for 
> Min...
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0
> {code}






[jira] [Commented] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383485#comment-15383485
 ] 

Akira Ajisaka commented on HDFS-10620:
--

+1 for the 002 patch. I ran the failed tests locally and all of them passed.

> StringBuilder created and appended even if logging is disabled
> --
>
> Key: HDFS-10620
> URL: https://issues.apache.org/jira/browse/HDFS-10620
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.4
>Reporter: Staffan Friberg
> Attachments: HDFS-10620.001.patch, HDFS-10620.002.patch
>
>
> In {{BlockManager.addToInvalidates}} the StringBuilder is appended to during 
> the delete even if logging isn't active.
> We could avoid allocating the StringBuilder as well, but it's not clear that 
> this is worth the extra null handling in the code.
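
A minimal sketch of the guard being described, assuming an SLF4J-style block
log (illustrative only, not the actual {{BlockManager}} code; the logger name
and message format are assumptions): the builder is only allocated and
appended to when debug logging is enabled, which is where the null handling
mentioned above comes in.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AddToInvalidatesSketch {
  private static final Logger BLOCK_LOG =
      LoggerFactory.getLogger("BlockStateChange");

  void addToInvalidates(long[] blockIds) {
    // Allocate the builder only when the message can actually be logged.
    StringBuilder datanodes =
        BLOCK_LOG.isDebugEnabled() ? new StringBuilder() : null;
    for (long id : blockIds) {
      // ... the real per-replica invalidation work would happen here ...
      if (datanodes != null) {
        datanodes.append(' ').append(id);
      }
    }
    if (datanodes != null && datanodes.length() > 0) {
      BLOCK_LOG.debug("BLOCK* addToInvalidates: {}", datanodes);
    }
  }
}
{code}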






[jira] [Commented] (HDFS-10643) HDFS namenode should always use service user (hdfs) to generateEncryptedKey

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383468#comment-15383468
 ] 

Hadoop QA commented on HDFS-10643:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 
21s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818706/HDFS-10643.01.patch |
| JIRA Issue | HDFS-10643 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9d10856cefc2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16084/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16084/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16084/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HDFS namenode should always use service user (hdfs) to generateEncryptedKey
> ---
>
> Key: HDFS-10643
> URL: https://issues.apache.org/jira/browse/HDFS-10

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383461#comment-15383461
 ] 

Konstantin Shvachko commented on HDFS-10301:


> adding a list of storage IDs to the block report RPC by making a 
> backwards-compatible protobuf change.

The storage IDs are already present in the current block report protobuf. Why 
would you want a new field for that? You would need to duplicate all storage 
IDs in the case of a full block report that is not split into multiple RPCs. 
That seems confusing and inefficient to me.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombies. Replicas from zombie storages 
> are immediately removed, causing missing blocks.






[jira] [Commented] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383454#comment-15383454
 ] 

Hadoop QA commented on HDFS-10620:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818678/HDFS-10620.002.patch |
| JIRA Issue | HDFS-10620 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b17a581caa1d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16083/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16083/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16083/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> StringBuilder created and appended even if logging is disabled
> --
>
>   

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383447#comment-15383447
 ] 

Hadoop QA commented on HDFS-10301:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 368 unchanged - 12 fixed = 370 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 
58s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818684/HDFS-10301.010.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d118fdcd3ae4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16082/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16082/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16082/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
>

[jira] [Commented] (HDFS-10643) HDFS namenode should always use service user (hdfs) to generateEncryptedKey

2016-07-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383432#comment-15383432
 ] 

Xiao Chen commented on HDFS-10643:
--

Thanks [~xyao] for opening the issue and the patch.
I think the idea makes sense, since from the HDFS perspective the only user that 
needs to generate EDEK is {{hdfs}}. Ping [~andrew.wang] for awareness.
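
For illustration, the NN-side call could be wrapped roughly like this ({{keyProvider}} and {{ezKeyName}} are placeholder names; this is a sketch, not the attached patch):
{code:java}
// Sketch: generate the EDEK as the NN login user (hdfs), so the proxy-user
// context of the calling client (hive, oozie, ...) never reaches the KMS.
EncryptedKeyVersion edek = UserGroupInformation.getLoginUser().doAs(
    (PrivilegedExceptionAction<EncryptedKeyVersion>) () ->
        keyProvider.generateEncryptedKey(ezKeyName));
{code}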


Regarding {{checkTGTAndReloginFromKeytab}}, you're absolutely right that we 
don't need it in the client code here. I think adding it to 
{{KerberosAuthenticator}} makes sense logically, and in that case we don't need 
these in DTA any more. 
{code}
  public void authenticate(URL url, AuthenticatedURL.Token token)
  throws IOException, AuthenticationException {
if (!hasDelegationToken(url, token)) {
  // check and renew TGT to handle potential expiration
  UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
  authenticator.authenticate(url, token);
}
  }
{code}
I didn't put it there in HADOOP-13255 because KA is in hadoop-auth component, 
while DTA and UGI are both in hadoop-common. Feels like we'll need a dependency 
between the two in order to add this... Let's follow up on this in the separate 
jira.

> HDFS namenode should always use service user (hdfs) to generateEncryptedKey
> ---
>
> Key: HDFS-10643
> URL: https://issues.apache.org/jira/browse/HDFS-10643
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10643.00.patch, HDFS-10643.01.patch
>
>
> KMSClientProvider is designed to be shared by different KMS clients. When 
> HDFS Namenode as KMS client talks to KMS to generateEncryptedKey for new file 
> creation from proxy user (hive, oozie), the proxyuser handling for 
> KMSClientProvider in this case is unnecessary, which causes 1) an extra proxy 
> user configuration allowing hdfs user to proxy its clients and 2) KMS acls to 
> allow non-hdfs user for GENERATE_EEK operation. 
> This ticket is opened to always use HDFS namenode login user (hdfs) when 
> talking to KMS to generateEncryptedKey for new file creation. This way, we 
> have a more secure KMS based HDFS encryption (we can set kms-acls to allow 
> only hdfs user for GENERATE_EEK) with less configuration hassle for KMS to 
> allow hdfs to proxy other users. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383413#comment-15383413
 ] 

Colin P. McCabe commented on HDFS-10301:


--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
{code}
@@ -308,10 +308,10 @@ public synchronized boolean checkLease(DatanodeDescriptor 
dn,
   return false;
 }
 if (node.leaseId == 0) {
-  LOG.warn("BR lease 0x{} is not valid for DN {}, because the DN " +
-   "is not in the pending set.",
-   Long.toHexString(id), dn.getDatanodeUuid());
-  return false;
+  LOG.debug("DN {} is not in the pending set because BR with "
+  + "lease 0x{} was processed out of order",
+  dn.getDatanodeUuid(), Long.toHexString(id));
+  return true;
 }
{code}

There are other reasons why {{node.leaseId}} might be 0, besides block reports 
getting processed out of order.  For example, an RPC could have gotten 
duplicated by something in the network.  Let's not change the existing error 
message.

{code}
StorageBlockReport[] lastSplitReport =
new StorageBlockReport[perVolumeBlockLists.size()];
// When block reports are split, the last RPC in the block report
// has the information about all storages in the block report.
// See HDFS-10301 for more details. To achieve this, the last RPC
// has 'n' storage reports, where 'n' is the number of storages in
// a DN. The actual block replicas are reported only for the
// last/n-th storage.
{code}
Why do we have to use such a complex and confusing approach?  Like I commented 
earlier, a report of the existing storages is not the same as a block report.  
Why are we creating {{BlockListAsLongs}} objects that aren't lists of blocks?

There is a much simpler approach, which is just adding a list of storage IDs to 
the block report RPC by making a backwards-compatible protobuf change.  It's 
really easy:

{code}
+repeated String allStorageIds = 8;
{code}
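
For illustration, the NameNode could consume such a list roughly as follows (a sketch with illustrative names; {{removeBlocksAssociatedTo}} stands in for whatever removal helper is used):
{code:java}
// Sketch: remove storages the DN no longer reports, based on the explicit
// allStorageIds list carried in the (last) block report RPC.
void removeZombieStorages(DatanodeDescriptor dn, Set<String> allStorageIds) {
  for (DatanodeStorageInfo storage : dn.getStorageInfos()) {
    if (!allStorageIds.contains(storage.getStorageID())) {
      // The DN no longer has this storage; its replicas can be removed.
      removeBlocksAssociatedTo(storage);  // hypothetical helper
    }
  }
}
{code}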

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8901) Use ByteBuffer in striping positional read

2016-07-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383408#comment-15383408
 ] 

Kai Zheng commented on HDFS-8901:
-

Since HDFS-10548 has gone in, this will need to be rebased. [~hayabusa], would 
you help with this? Thanks!

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-8901-v10.patch, HDFS-8901-v2.patch, 
> HDFS-8901-v3.patch, HDFS-8901-v4.patch, HDFS-8901-v5.patch, 
> HDFS-8901-v6.patch, HDFS-8901-v7.patch, HDFS-8901-v8.patch, 
> HDFS-8901-v9.patch, initial-poc.patch
>
>
> Native erasure coder prefers to direct ByteBuffer for performance 
> consideration. To prepare for it, this change uses ByteBuffer through the 
> codes in implementing striping position read. It will also fix avoiding 
> unnecessary data copying between striping read chunk buffers and decode input 
> buffers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10619) Cache path in InodesInPath

2016-07-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383404#comment-15383404
 ] 

Yiqun Lin commented on HDFS-10619:
--

Hi [~daryn], I have one other minor comment. Can we also replace 
{{DFSUtil.byteArray2PathString(path)}} with {{pathname}} in the method 
{{INodesInPath#toString}}? There is no need to parse the path again here.
{code}
  private String toString(boolean vaildateObject) {
if (vaildateObject) {
  validate();
}

final StringBuilder b = new StringBuilder(getClass().getSimpleName())
.append(": path = ").append(DFSUtil.byteArray2PathString(path))
.append("\n  inodes = ");
...
{code}

> Cache path in InodesInPath
> --
>
> Key: HDFS-10619
> URL: https://issues.apache.org/jira/browse/HDFS-10619
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10619.patch
>
>
> INodesInPath#getPath, a frequently called method, dynamically builds the 
> path.  IIP should cache the path upon construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-18 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-10651:


 Summary: Clean up some configuration related codes about legacy 
block reader
 Key: HDFS-10651
 URL: https://issues.apache.org/jira/browse/HDFS-10651
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor


HDFS-10548 removed the legacy block reader. This is to clean up the 
configuration related codes accordingly as [~andrew.wang] suggested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10626) VolumeScanner prints incorrect IOException in reportBadBlocks operation

2016-07-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10626:
-
Attachment: HDFS-10626.004.patch

Thanks [~yzhangal] for the review. Posted a new patch to address the comments.

> VolumeScanner prints incorrect IOException in reportBadBlocks operation
> ---
>
> Key: HDFS-10626
> URL: https://issues.apache.org/jira/browse/HDFS-10626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10626.001.patch, HDFS-10626.002.patch, 
> HDFS-10626.003.patch, HDFS-10626.004.patch
>
>
> VolumeScanner throws incorrect IOException in {{datanode.reportBadBlocks}}. 
> The related codes:
> {code}
> public void handle(ExtendedBlock block, IOException e) {
>   FsVolumeSpi volume = scanner.volume;
>   ...
>   try {
> scanner.datanode.reportBadBlocks(block, volume);
>   } catch (IOException ie) {
> // This is bad, but not bad enough to shut down the scanner.
> LOG.warn("Cannot report bad " + block.getBlockId(), e);
>   }
> }
> {code}
> The IOException printed in the log should be {{ie}} rather than {{e}}, which 
> was passed into the method {{handle(ExtendedBlock block, IOException e)}}.
> It will be important info that can help us know why datanode 
> reportBadBlocks failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383376#comment-15383376
 ] 

John Zhuge commented on HDFS-10650:
---

In {{trunk}} branch, things were consolidated into this private method 
{{DFSClient#applyUMask}}:
{code:java}
  private FsPermission applyUMask(FsPermission permission) {
if (permission == null) {
  permission = FsPermission.getFileDefault();
}
return permission.applyUMask(dfsClientConf.getUMask());
  }
{code}

However, the same problem exists when this is called by {{mkdirs}} and {{primitiveMkdir}}.
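
A sketch of the directory-default variant (the method name {{applyUMaskDir}} is hypothetical; the actual fix may well reshape the existing method instead):
{code:java}
// Hypothetical sketch: fall back to the directory default when no permission
// is supplied, instead of the file default.
private FsPermission applyUMaskDir(FsPermission permission) {
  if (permission == null) {
    permission = FsPermission.getDirDefault();  // 0777 before umask, vs. 0666 for files
  }
  return permission.applyUMask(dfsClientConf.getUMask());
}
{code}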

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> These 2 DFSClient methods should use default directory permission to create a 
> directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-18 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10650:
-

 Summary: DFSClient#mkdirs and DFSClient#primitiveMkdir should use 
default directory permission
 Key: HDFS-10650
 URL: https://issues.apache.org/jira/browse/HDFS-10650
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


These 2 DFSClient methods should use default directory permission to create a 
directory.
{code:java}
  public boolean mkdirs(String src, FsPermission permission,
  boolean createParent) throws IOException {
if (permission == null) {
  permission = FsPermission.getDefault();
}
{code}
{code:java}
  public boolean primitiveMkdir(String src, FsPermission absPermission, 
boolean createParent)
throws IOException {
checkOpen();
if (absPermission == null) {
  absPermission = 
FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
} 
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-18 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-10649:

Assignee: Chen Liang

> Remove unused PermissionStatus#applyUMask
> -
>
> Key: HDFS-10649
> URL: https://issues.apache.org/jira/browse/HDFS-10649
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Chen Liang
>Priority: Trivial
>  Labels: newbie
>
> Class {{PermissionStatus}} is LimitedPrivate("HDFS", "MapReduce") and 
> Unstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10649:
--
Description: Class {{PermissionStatus}} is LimitedPrivate("HDFS", 
"MapReduce") and Unstable.  (was: Class {{PermissionStatus}} is 
LimitedPrivate({"HDFS", "MapReduce"}) and Unstable.)

> Remove unused PermissionStatus#applyUMask
> -
>
> Key: HDFS-10649
> URL: https://issues.apache.org/jira/browse/HDFS-10649
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Trivial
>  Labels: newbie
>
> Class {{PermissionStatus}} is LimitedPrivate("HDFS", "MapReduce") and 
> Unstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-18 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10649:
-

 Summary: Remove unused PermissionStatus#applyUMask
 Key: HDFS-10649
 URL: https://issues.apache.org/jira/browse/HDFS-10649
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: John Zhuge
Priority: Trivial


Class {{PermissionStatus}} is LimitedPrivate({"HDFS", "MapReduce"}) and 
Unstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383330#comment-15383330
 ] 

Chris Nauroth commented on HDFS-6962:
-

bq. One additional question before responding to your comments. I added 
getMasked and getUnmasked with default implementations to FsPermission which is 
public and stable. Is that ok? The alternative to this approach is to use 
instanceof to detect FsCreateModes object with an FsPermission reference.

Adding new methods to a public/stable class is acceptable according to [Apache 
Hadoop 
Compatibility|http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/Compatibility.html]
 guidelines.  We took a similar approach when adding the ACL bit.  We added 
{{FsPermission#getAclBit}} with a default implementation.  The HDFS-specific 
{{FsPermissionExtension}} subclass overrides that method.
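
A sketch of that pattern applied here (simplified and illustrative, not the actual patch):
{code:java}
// Default implementations added to the public/stable FsPermission class:
public FsPermission getMasked() {
  return this;   // by default, the permission itself is the masked create mode
}

public FsPermission getUnmasked() {
  return null;   // by default, no separate unmasked mode is carried
}

// An HDFS-side subclass (e.g. FsCreateModes) overrides both methods to carry
// the unmasked create mode alongside the masked one.
{code}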

bq. I think it is ok. Will it affect our plan to backport the fix to CDH 
branches based on 2.6.0?

I can't comment definitively on CDH concerns, but I expect that any distro 
could make the choice to apply the patch to prior maintenance lines if they 
come to a different risk assessment decision.  The ACL code changes 
infrequently at this point, so I expect it would be trivial to backport, with 
low likelihood of complex merge conflicts.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has the rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner -- ) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10643) HDFS namenode should always use service user (hdfs) to generateEncryptedKey

2016-07-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10643:
--
Attachment: HDFS-10643.01.patch

{{checkTGTAndReloginFromKeytab}} is not needed with HADOOP-13255 per discussion 
with [~jnp]. Adding a patch v1 for that. 
I'm working on the unit test of this and will update the patch again later. 

I also found a potential issue with HADOOP-13255 where 
{{checkTGTAndReloginFromKeytab}} is invoked only in 
{{DelegationTokenAuthenticator#authenticate}} but not in 
{{KerberosAuthenticator#authenticate}}. This is not an issue now because we 
currently don't use {{KerberosAuthenticator}} directly. Only 
{{DelegationTokenAuthenticator}} or {{KerberosDelegationTokenAuthenticator}} 
are being used. Since both {{KerberosAuthenticator}} and 
{{DelegationTokenAuthenticator}} implement the {{Authenticator}} interface, it 
is good to have {{checkTGTAndReloginFromKeytab}} added to {{authenticate}} 
implementations for consistency. I will open a separate ticket for it.
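
For illustration, the consistency change could look roughly like this (a sketch only; the follow-up jira may do it differently, e.g. because of the hadoop-auth vs. hadoop-common module dependency):
{code:java}
// Sketch: mirror the TGT check in KerberosAuthenticator#authenticate so both
// Authenticator implementations behave the same way.
public void authenticate(URL url, AuthenticatedURL.Token token)
    throws IOException, AuthenticationException {
  // check and renew TGT to handle potential expiration before negotiating
  UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
  // ... existing SPNEGO negotiation follows unchanged ...
}
{code}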

cc: [~xiaochen] and [~zhz] for additional feedback.  

> HDFS namenode should always use service user (hdfs) to generateEncryptedKey
> ---
>
> Key: HDFS-10643
> URL: https://issues.apache.org/jira/browse/HDFS-10643
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10643.00.patch, HDFS-10643.01.patch
>
>
> KMSClientProvider is designed to be shared by different KMS clients. When 
> HDFS Namenode as KMS client talks to KMS to generateEncryptedKey for new file 
> creation from proxy user (hive, oozie), the proxyuser handling for 
> KMSClientProvider in this case is unnecessary, which causes 1) an extra proxy 
> user configuration allowing hdfs user to proxy its clients and 2) KMS acls to 
> allow non-hdfs user for GENERATE_EEK operation. 
> This ticket is opened to always use HDFS namenode login user (hdfs) when 
> talking to KMS to generateEncryptedKey for new file creation. This way, we 
> have a more secure KMS based HDFS encryption (we can set kms-acls to allow 
> only hdfs user for GENERATE_EEK) with less configuration hassle for KMS to 
> allow hdfs to proxy other users. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-18 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383319#comment-15383319
 ] 

Virajith Jalaparti commented on HDFS-10636:
---

Uploading a new patch that contains some classes missing from the earlier 
patch and also fixes the TODO pointed out by [~jpallas]. 

[~jpallas], yes, agreed. If there is an implementation that uses non-{{File}}-based 
local replicas, the name of {{LocalReplicaInfo}} can be changed to something 
like {{FileReplicaInfo}}. However, that would only be a class rename and need 
not involve any more extensive changes. 

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Target Version/s: 3.0.0-alpha2  (was: 2.8.0)
Hadoop Flags: Incompatible change

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has the rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner -- ) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383312#comment-15383312
 ] 

John Zhuge commented on HDFS-6962:
--

Thank you very much [~cnauroth]. Valid points.

One additional question before responding to your comments. I added 
{{getMasked}} and {{getUnmasked}} with default implementations to 
{{FsPermission}} which is public and stable. Is that ok? The alternative to 
this approach is to use {{instanceof}} to detect {{FsCreateModes}} object with 
an {{FsPermission}} reference.

bq. target it to the 3.x line

I think it is ok. Will it affect our plan to backport the fix to CDH branches 
based on 2.6.0?

bq. WebHDFS support

Will implement it in this jira.

bq. adding the {{createModes}} member to {{INodeWithAdditionalFields}}

Yes, I made the trade-off between adding a field to 
{{INodeWithAdditionalFields}} and changing method signatures along the NN stack 
from RPC to {{FSDirectory#copyINodeDefaultAcl}}. I will revisit the decision.

bq. LOG.warn("Received create request without unmasked create mode");

Changed to {{debug}}.

bq. "comppatible"

Fixed.

bq. add a clarifying statement that umask would be ignored 

Done.

bq. make a new test suite, similar to TestAclCLI, but with 
dfs.namenode.posix.acl.inheritance.enabled set to true.

Will do.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has the rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner -- ) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Attachment: HDFS-10636.002.patch

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Attachment: (was: HDFS-10636.002.patch)

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Attachment: HDFS-10636.002.patch

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HDFS-4176:
---

Assignee: Lei (Eddy) Xu  (was: Zhe Zhang)

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.
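
For illustration, one way to bound the call is to issue it through an executor and wait with a timeout (a sketch; {{activeNNProxy}} and {{rollTimeoutMs}} are assumed names, and the eventual fix may look different):
{code:java}
// Sketch: run rollEditLog() on a worker thread and give up after a timeout,
// so a frozen active NN cannot block the standby's tailer thread forever.
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Void> roll = executor.submit(() -> {
  activeNNProxy.rollEditLog();          // assumed proxy to the active NN
  return null;
});
try {
  roll.get(rollTimeoutMs, TimeUnit.MILLISECONDS);
} catch (TimeoutException te) {
  roll.cancel(true);                    // abandon the hung call
  LOG.warn("rollEditLog timed out after " + rollTimeoutMs + " ms", te);
} catch (InterruptedException | ExecutionException e) {
  LOG.warn("rollEditLog failed", e);
}
{code}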



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383300#comment-15383300
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
6s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 368 unchanged - 12 fixed = 372 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818632/HDFS-10301.009.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c2af2824bb8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c2bcffb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16081/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16081/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16081/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16081/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing 

[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383298#comment-15383298
 ] 

Andrew Wang commented on HDFS-10519:


Hi Jiayi, thanks for working on this. I took a look at the patch and have 
mostly style comments. It looks pretty good overall, appreciate the new tests.

* config key should mention "edit log" somewhere. We can reuse an existing 
prefix and call it something like "dfs.ha.tail-edits.in-progress"
* Would be good to explain a little more in hdfs-default.xml why a user might 
want to turn this on.
* EditLogTailer, regarding the comment: edits themselves are not in-progress, 
an edit log segment is. Consider renaming the boolean also.
* "isTail" is also not very descriptive, what it's really doing is bounding the 
tailing to the committed txID length right? Better to name it accordingly, 
rather than tie it to a single usecase like standby tailing.

Journal.java

* inProgress boolean is unused. Conceptually the JNs shouldn't have to be aware 
of the standby doing in-progress tailing anyway.
* getCommittedTxnIdForTests isn't used only in tests, rename it?
* Can you add a comment on the {{numTxns == 0}} early return, explaining why we 
do this?

* JournalSet, why is the correct committed txnid to set here {{0}} rather than 
the max txId from the set of logs?

QuorumOutputStream
* Let's pull the boolean out of the conf and pass it into the constructor 
rather than pass in the entire Configuration, this helps limit the scope. The 
boolean should also be named something more descriptive, like 
"updateCommittedTxnId".
* The new conditional dummy flush also needs a comment for sure. How stale 
would the committed txnId be if we didn't write this 0-len segment? Any ideas 
on how to further optimize this (e.g. make it more async)?

* QuorumJournalManager, the ternary can be written instead with Math.min yea? 
I'd prefer to see a little logic rework that avoids the need for the {{else}} 
also.

RemoteEditLogManifest
* Do we need the single-arg form of the constructor? We only use it once in a 
test, maybe just always require the two-arg form.
* Initialize committedTxnId to some invalid txn id, and validate it in 
checkState? Should also be within bounds.
* toString output should included the committedTxnId

* TestBKSR, TestFJM, you don't need to modify the call since we kept the old 
overload.
* TestNNSRManager, we should pass through the parameter in the mock, like the 
other args

TestStandbyInProgressTail

* Typo: edig
* Typo: shoudl
* Do we need the Thread.sleep for tailing? Can we trigger edit log tailing 
manually instead? Sleeps are unreliable and increase test runtime.
* Looks like some of the startup/shutdown logic is shared, can we use @Before 
and @After annotated methods to share code? As an FYI we also normally do a 
null check on cluster before calling cluster.shutdown, as an extra guard.
* New assert methods can be made static
* Any thoughts on hooking this up to the randomized fault-injection tests 
originally developed for QJM? I think we'd all have more confidence in this 
feature if we ran the fuzzer for a while and it came back clean.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> the data freshness. In-progress tailing is already implemented, but it's not 
> enabled as default configuration. And there's no related configuration key to 
> turn it on.
> Adding a related configuration key to let Standby Namenode is reasonable and 
> would be a basis for further improvement on Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10626) VolumeScanner prints incorrect IOException in reportBadBlocks operation

2016-07-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383261#comment-15383261
 ] 

Yongjun Zhang commented on HDFS-10626:
--

Hi [~linyiqun],

Thanks for the new rev.

What about changing:
{code}
  LOG.warn("Reporting bad {} on {}", block, volume.getBasePath());
  try {
scanner.datanode.reportBadBlocks(block, volume);
  } catch (IOException ie) {
// This is bad, but not bad enough to shut down the scanner.
LOG.warn("Cannot report bad block {}, {}, "
+ "the exception for the bad block: {}", block, ie, e);
  }
{code}

to

{code}
  LOG.warn("Reporting bad " + block + " with volume "
  + volume.getBasePath(), e);
  try {
scanner.datanode.reportBadBlocks(block, volume);
  } catch (IOException ie) {
// This is bad, but not bad enough to shut down the scanner.
LOG.warn("Cannot report bad block " + block, ie);
  }
{code}

Notice that I changed to use a different logger api. Since this is exception 
handling, the cost increase may not be a big problem.

Thanks.


> VolumeScanner prints incorrect IOException in reportBadBlocks operation
> ---
>
> Key: HDFS-10626
> URL: https://issues.apache.org/jira/browse/HDFS-10626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10626.001.patch, HDFS-10626.002.patch, 
> HDFS-10626.003.patch
>
>
> VolumeScanner throws incorrect IOException in {{datanode.reportBadBlocks}}. 
> The related codes:
> {code}
> public void handle(ExtendedBlock block, IOException e) {
>   FsVolumeSpi volume = scanner.volume;
>   ...
>   try {
> scanner.datanode.reportBadBlocks(block, volume);
>   } catch (IOException ie) {
> // This is bad, but not bad enough to shut down the scanner.
> LOG.warn("Cannot report bad " + block.getBlockId(), e);
>   }
> }
> {code}
> The IOException printed in the log should be {{ie}} rather than {{e}}, which 
> was passed into the method {{handle(ExtendedBlock block, IOException e)}}.
> It will be important info that can help us know why datanode 
> reportBadBlocks failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383249#comment-15383249
 ] 

Chris Nauroth commented on HDFS-6962:
-

Hello [~jzhuge].  I apologize for my delayed response.  Thank you for working 
on this tricky issue.

I think what you are proposing for configurability and extending the protocol 
messages makes sense as a way to provide deployments with a choice of which 
behavior to use.  However, I'm reluctant to push it into 2.8.0 now due to the 
complexity of the changes required to support it.  Considering something like a 
cross-cluster DistCp, with a mix of old and new versions in play, it could 
become very confusing to explain the end results to users.  Unless you consider 
it urgent for 2.8.0, would you consider targeting it to the 3.x line, as I had 
done a while ago?

I don't think we can realistically ship without the WebHDFS support in place.  
At this point, there is a user expectation of feature parity for ACL commands 
whether the target is an hdfs: path or a webhdfs: path.  If you want to track 
WebHDFS work in a separate JIRA, then I think that's fine, but I wouldn't want 
to ship a non-alpha release lacking the WebHDFS support.

I am concerned about adding the {{createModes}} member to 
{{INodeWithAdditionalFields}} because of the increased per-inode memory 
footprint in the NameNode.  Even for a {{null}}, there is still the pointer 
cost.  I assume this was done because it was the easiest way to get the masked 
vs. unmasked information passed all the way down to 
{{FSDirectory#copyINodeDefaultAcl}} during new file/directory creation.  That 
information is not valuable beyond the lifetime of the creation operation, so 
paying memory to preserve it longer is unnecessary.  I think we'll need to 
explore passing the unmasked information along separately from the inode 
object.  Unfortunately, this likely will make the change more awkward, 
requiring changes to method signatures to accept more arguments.

{code}
  if (modes == null) {
LOG.warn("Received create request without unmasked create mode");
  }
{code}

I expect this log statement would be noisy in practice.  I recommend removing 
it or changing it to debug level if you find it helpful.

The documentation of {{dfs.namenode.posix.acl.inheritance.enabled}} in 
hdfs-default.xml and HdfsPermissionsGuide.md looks good overall.  I saw one 
typo in both places: "comppatible" instead of "compatible".  Could you also add 
a clarifying statement that umask would be ignored if the parent has a default 
ACL?  It could be as simple as "...will apply default ACLs from the parent 
directory to the create mode and ignore umask."

In addition to the new tests you added to {{FSAclBaseTest}}, I recommend 
testing through the shell.  The XML-driven shell tests don't have a way to 
reconfigure the mini-cluster under test.  I expect you'll need to make a new 
test suite, similar to {{TestAclCLI}}, but with 
{{dfs.namenode.posix.acl.inheritance.enabled}} set to {{true}}.

bq. PermissionStatus#applyUMask never used, remove it?

bq. DFSClient#mkdirs and {{DFSClient#primitiveMkdir}} use file default if 
permission is null. Should use dir default permission?

You might consider filing separate JIRAs for these 2 observations, so that we 
keep the scope here focused on the ACL inheritance issue.


> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches with the umaskmode defined in 
> hdfs-site.xml, everything ok !
> default:group:rea

[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi updated HDFS-10301:
-
Attachment: HDFS-10301.010.patch

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383224#comment-15383224
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

I have made STORAGE_REPORT {{static final}} in the 010 patch.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2016-07-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10620:
-
Attachment: HDFS-10620.002.patch

Attaching the latter patch.

> StringBuilder created and appended even if logging is disabled
> --
>
> Key: HDFS-10620
> URL: https://issues.apache.org/jira/browse/HDFS-10620
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.4
>Reporter: Staffan Friberg
> Attachments: HDFS-10620.001.patch, HDFS-10620.002.patch
>
>
> In BlockManager.addToInvalidates the StringBuilder is appended to during the 
> delete even if logging isn't active.
> Could avoid allocating the StringBuilder as well, but not sure if it is 
> really worth it to add null handling in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383187#comment-15383187
 ] 

Akira Ajisaka commented on HDFS-10620:
--

I'm in favor of the latter patch to avoid creating a StringBuilder instance when 
blockLog.isDebugEnabled() is false.
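
The pattern being discussed is roughly the following; a sketch with placeholder names, not the actual BlockManager code:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AddToInvalidatesSketch {
  private static final Logger blockLog =
      LoggerFactory.getLogger("BlockStateChange");

  void addToInvalidates(Object block, Object[] datanodes) {
    // Only allocate (and append to) the StringBuilder when debug logging is on.
    StringBuilder targets =
        blockLog.isDebugEnabled() ? new StringBuilder() : null;
    for (Object dn : datanodes) {
      // ... queue 'block' for invalidation on 'dn' ...
      if (targets != null) {
        targets.append(dn).append(' ');
      }
    }
    if (targets != null) {
      blockLog.debug("BLOCK* addToInvalidates: {} {}", block, targets);
    }
  }
}
{code}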

> StringBuilder created and appended even if logging is disabled
> --
>
> Key: HDFS-10620
> URL: https://issues.apache.org/jira/browse/HDFS-10620
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.4
>Reporter: Staffan Friberg
> Attachments: HDFS-10620.001.patch
>
>
> In BlockManager.addToInvalidates the StringBuilder is appended to during the 
> delete even if logging isn't active.
> Could avoid allocating the StringBuilder as well, but not sure if it is 
> really worth it to add null handling in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383161#comment-15383161
 ] 

Konstantin Shvachko commented on HDFS-10301:


All-capital identifiers are reserved for constants, that is, {{static final 
STORAGE_REPORT}}.
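
For reference, the convention being asked for is roughly this (illustrative only; the real field's type and value differ):
{code}
class NamingConventionSketch {
  // An all-caps identifier signals a compile-time constant, so it should be static final:
  static final String STORAGE_REPORT = "storageReport";
  // A mutable or per-instance field should use camelCase instead:
  private long lastStorageReportSize;
}
{code}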

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383139#comment-15383139
 ] 

Akira Ajisaka commented on HDFS-10645:
--

+1 for the idea. Some comments from me:

1. Can we use Set for blockReportSizes instead of int[]? That way we 
can get the max value by just calling {{Collections.max}} (and we need to 
synchronize the set); see the sketch after this list.
2. Would you remove unused imports from TestDataNodeMXBean?
3. Would you remove the following unnecessary comments from the test?
{code}
  LOG.info("yuanbo print here " + dn.getBPServiceActorInfo());
{code}
Logging is good, but do not print your name in any source code.
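
A small sketch of the suggestion in point 1; the field and method names are illustrative, not the patch's actual code:
{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class BlockReportSizeTracker {
  // Synchronized because block reports for different block pools may be
  // recorded from different actor threads.
  private final Set<Integer> blockReportSizes =
      Collections.synchronizedSet(new HashSet<Integer>());

  void recordBlockReportSize(int sizeInBytes) {
    blockReportSizes.add(sizeInBytes);
  }

  int maxBlockReportSize() {
    // Iteration over a synchronized collection still needs manual locking.
    synchronized (blockReportSizes) {
      return blockReportSizes.isEmpty() ? 0 : Collections.max(blockReportSizes);
    }
  }
}
{code}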

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, Selection_047.png, 
> Selection_048.png
>
>
> Record the block report size as a metric and show it on the datanode UI. It's 
> important for administrators to know the bottleneck of block reports, and the 
> metric is also useful for tuning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10648) Expose Balancer metrics through Metrics2

2016-07-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10648:
-
Labels: metrics  (was: )

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Mark Wagner
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode

2016-07-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-1312:

Release Note: The Disk Balancer lets administrators rebalance data across 
multiple disks of a DataNode. It is useful to correct skewed data distribution 
often seen after adding or replacing disks. Disk Balancer can be enabled by 
setting dfs.disk.balancer.enabled to true in hdfs-site.xml. It can be invoked 
by running "hdfs diskbalancer". See the "HDFS Diskbalancer"  section in the 
HDFS Commands guide for detailed usage.  (was: The Disk Balancer lets 
administrators rebalance data across multiple disks of a DataNode. It is useful 
to correct skewed data distribution often after adding or replacing disks. Disk 
Balancer can be enabled by setting dfs.disk.balancer.enabled to true in 
hdfs-site.xml. It can be invoked by running "hdfs diskbalancer". See the "HDFS 
Diskbalancer"  section in the HDFS Commands guide for detailed usage.)

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Fix For: 3.0.0-alpha1
>
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, 
> HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, 
> HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling 
> disks at the sameish rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10648) Expose Balancer metrics through Metrics2

2016-07-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10648:
-
Component/s: balancer & mover

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Mark Wagner
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10648) Expose Balancer metrics through Metrics2

2016-07-18 Thread Mark Wagner (JIRA)
Mark Wagner created HDFS-10648:
--

 Summary: Expose Balancer metrics through Metrics2
 Key: HDFS-10648
 URL: https://issues.apache.org/jira/browse/HDFS-10648
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Mark Wagner


The Balancer currently prints progress information to the console. For 
deployments that run the balancer frequently, it would be helpful to collect 
those metrics for publishing to the available sinks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-07-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382995#comment-15382995
 ] 

Andrew Wang commented on HDFS-1312:
---

Thanks Arpit, I've added similar text over at HADOOP-13383 for the website 
notes.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Fix For: 3.0.0-alpha1
>
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, 
> HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, 
> HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling 
> disks at the sameish rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382959#comment-15382959
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

Attached a new patch (009) addressing Konstantin's comments. I cannot make 
STORAGE_REPORT final since it needs to be referenced from a static context. 
Instead, I renamed it to 'Storage_Report'. 

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-18 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi updated HDFS-10301:
-
Attachment: HDFS-10301.009.patch

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then it 
> sends the block report again. Then the NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-18 Thread Joe Pallas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382860#comment-15382860
 ] 

Joe Pallas commented on HDFS-10636:
---

I haven't reviewed every line, but I noticed in 
{{FsDatasetUtil.createNullChecksumFile}} there's a {{// TODO Auto-generated 
catch block}}.

Overall I like this, and that it addresses some of the issues raised in 
HDFS-5194.

One question about naming:  In the context of HDFS-9806, the distinction 
between Provided and Local makes sense.  But there might be other replica 
implementations that use non-{{File}} based storage but are still "local".  I 
don't have a concrete alternative to "Local" to propose, though.

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382819#comment-15382819
 ] 

Lei (Eddy) Xu commented on HDFS-10633:
--

Thanks a lot for the patch and reviews, [~linyiqun] and [~ajisakaa]

[~linyiqun] Would you rephrase the sentence "The percentage that disk 
tolerance that we are ok with"? I am confused about what "disk tolerance" is 
and what the "percentage" of it refers to. Or do you mean "the percentage of disk 
space"? 

Thanks.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch
>
>
> HDFS-10600 introduced a new setting 
> {{dfs.disk.balancer.plan.threshold.percent}} in the diskbalancer. This setting 
> controls whether we need to do any balancing on the volume set, but the new 
> setting has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-07-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382793#comment-15382793
 ] 

Arpit Agarwal commented on HDFS-1312:
-

Done.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Fix For: 3.0.0-alpha1
>
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, 
> HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, 
> HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling 
> disks at the sameish rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode

2016-07-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-1312:

Release Note: The Disk Balancer lets administrators rebalance data across 
multiple disks of a DataNode. It is useful to correct skewed data distribution 
often after adding or replacing disks. Disk Balancer can be enabled by setting 
dfs.disk.balancer.enabled to true in hdfs-site.xml. It can be invoked by 
running "hdfs diskbalancer". See the "HDFS Diskbalancer"  section in the HDFS 
Commands guide for detailed usage.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Fix For: 3.0.0-alpha1
>
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, 
> HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, 
> HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling 
> disks at the sameish rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382784#comment-15382784
 ] 

Akira Ajisaka commented on HDFS-10633:
--

+1, thanks [~linyiqun] for the update. Hi [~eddyxu], would you please review it?

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch
>
>
> HDFS-10600 introduced a new setting 
> {{dfs.disk.balancer.plan.threshold.percent}} in the diskbalancer. This setting 
> controls whether we need to do any balancing on the volume set, but the new 
> setting has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382770#comment-15382770
 ] 

Akira Ajisaka commented on HDFS-10647:
--

hadoop-project/src/site/site.xml should be changed.

> Add a link to HDFS disk balancer document in site.xml
> -
>
> Key: HDFS-10647
> URL: https://issues.apache.org/jira/browse/HDFS-10647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>  Labels: newbie
>
> We have HDFS disk balancer document but it's not linked from the top page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10647) Add a link to HDFS disk balancer document in site.xml

2016-07-18 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-10647:


 Summary: Add a link to HDFS disk balancer document in site.xml
 Key: HDFS-10647
 URL: https://issues.apache.org/jira/browse/HDFS-10647
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira Ajisaka


We have HDFS disk balancer document but it's not linked from the top page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-07-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382756#comment-15382756
 ] 

Andrew Wang commented on HDFS-1312:
---

Could someone (Arpit? Anu? Eddy?) add release notes for this feature? Thanks!

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Fix For: 3.0.0-alpha1
>
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, 
> HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, 
> HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling 
> disks at the sameish rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10635) expected/actual parameters inverted in TestGlobPaths assertEquals

2016-07-18 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382644#comment-15382644
 ] 

Chen Liang commented on HDFS-10635:
---

Thanks Steve, I'll keep an eye on that for now then.

> expected/actual parameters inverted in TestGlobPaths assertEquals
> -
>
> Key: HDFS-10635
> URL: https://issues.apache.org/jira/browse/HDFS-10635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Minor
>
> Pretty much all the assertEquals clauses in {{TestGlobPaths}} place the 
> actual value first, expected second. That's the wrong order and will lead to 
> misleading messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10635) expected/actual parameters inverted in TestGlobPaths assertEquals

2016-07-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382633#comment-15382633
 ] 

Steve Loughran commented on HDFS-10635:
---

Chen, before you start this, wait until HADOOP-13371 is done, or at least have 
a patch up. There's enough common code that you can cut out all the equals 
checks from every test, leaving something much more minimal where you just 
provide the pattern, the files and the lists of indexes in the matchedPath 
values:

{code}
  @Test
  public void testNestedCurlyBracket() throws Throwable {
    String[] files = {
        userDir + "/a.abcxx",
        userDir + "/a.abdxy",
        userDir + "/a.hlp",
        userDir + "/a.jhyy"
    };
    assertMatchOperation(userDir + "/a.{ab{c,d},jh}??", files, 0, 1, 3);
  }

  public Path[] assertMatchOperation(String pattern,
      String[] files,
      int... matchIndices)
      throws IOException {
    Path[] matchedPaths = prepareTesting(pattern, files);
    int expectedLength = matchIndices.length;
    StringBuilder builder = new StringBuilder(expectedLength * 128);
    builder.append("Expected Paths\n");
    for (int index : matchIndices) {
      if (index < path.length) {
        builder.append(String.format("  [%d] %s\n", index, path[index]));
      }
    }
    Joiner j = Joiner.on("\n  ");
    builder.append("\nMatched paths:\n  ");
    j.appendTo(builder, matchedPaths);
    assertEquals(builder.toString(), expectedLength, matchedPaths.length);
    for (int i = 0; i < matchedPaths.length; i++) {
      int expectedIndex = matchIndices[i];
      Path expectedPath = path[expectedIndex];
      assertEquals(String.format("Element %d: in %s", i, builder.toString()),
          expectedPath, matchedPaths[i]);
    }
    return matchedPaths;
  }
{code}


> expected/actual parameters inverted in TestGlobPaths assertEquals
> -
>
> Key: HDFS-10635
> URL: https://issues.apache.org/jira/browse/HDFS-10635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Minor
>
> Pretty much all the assertEquals clauses in {{TestGlobPaths}} place the 
> actual value first, expected second. That's the wrong order and will lead to 
> misleading messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10598) DiskBalancer does not execute multi-steps plan.

2016-07-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382630#comment-15382630
 ] 

Arpit Agarwal commented on HDFS-10598:
--

Hi [~eddyxu], I will review in a couple of days as the fix needs some thought. 
Last week and this week are very busy for me.

> DiskBalancer does not execute multi-steps plan.
> ---
>
> Key: HDFS-10598
> URL: https://issues.apache.org/jira/browse/HDFS-10598
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10598.00.patch
>
>
> I set up a 3-DN cluster, each DN with 2 small disks.  After creating 
> some files to fill HDFS, I added two more small disks to one DN and ran the 
> diskbalancer on that DataNode.
> The disk usage before running diskbalancer:
> {code}
> /dev/loop0  3.9G  2.1G  1.6G 58%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  17M  3.6G 1%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%  /mnt/data4
> {code}
> However, after running diskbalancer (i.e., {{-query}} shows {{PLAN_DONE}})
> {code}
> /dev/loop0  3.9G  1.2G  2.5G 32%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  953M  2.7G 26%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%   /mnt/data4
> {code}
> It is suspicious that in {{DiskBalancerMover#copyBlocks}}, every return does 
> {{this.setExitFlag}}, which prevents {{copyBlocks()}} from being called multiple 
> times from {{DiskBalancer#executePlan}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10635) expected/actual parameters inverted in TestGlobPaths assertEquals

2016-07-18 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-10635:
-

Assignee: Chen Liang

> expected/actual parameters inverted in TestGlobPaths assertEquals
> -
>
> Key: HDFS-10635
> URL: https://issues.apache.org/jira/browse/HDFS-10635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chen Liang
>Priority: Minor
>
> Pretty much all the assertEquals clauses in {{TestGlobPaths}} place the 
> actual value first, expected second. That's the wrong order and will lead to 
> misleading messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10598) DiskBalancer does not execute multi-steps plan.

2016-07-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382604#comment-15382604
 ] 

Lei (Eddy) Xu commented on HDFS-10598:
--

Ping [~arpitagarwal], would you mind giving this patch a review? Thanks!

> DiskBalancer does not execute multi-steps plan.
> ---
>
> Key: HDFS-10598
> URL: https://issues.apache.org/jira/browse/HDFS-10598
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10598.00.patch
>
>
> I set up a 3-DN cluster, each DN with 2 small disks.  After creating 
> some files to fill HDFS, I added two more small disks to one DN and ran the 
> diskbalancer on that DataNode.
> The disk usage before running diskbalancer:
> {code}
> /dev/loop0  3.9G  2.1G  1.6G 58%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  17M  3.6G 1%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%  /mnt/data4
> {code}
> However, after running diskbalancer (i.e., {{-query}} shows {{PLAN_DONE}})
> {code}
> /dev/loop0  3.9G  1.2G  2.5G 32%  /mnt/data1
> /dev/loop1  3.9G  2.6G  1.1G 71%  /mnt/data2
> /dev/loop2  3.9G  953M  2.7G 26%  /mnt/data3
> /dev/loop3  3.9G  17M  3.6G 1%   /mnt/data4
> {code}
> It is suspicious that in {{DiskBalancerMover#copyBlocks}}, every return does 
> {{this.setExitFlag}}, which prevents {{copyBlocks()}} from being called multiple 
> times from {{DiskBalancer#executePlan}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-07-18 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382547#comment-15382547
 ] 

Inigo Goiri commented on HDFS-10467:


Thanks [~vinayrpet] for the comments.

bq. Why cant we use DFSClient with HA here directly? Like how current clients 
connects to HA, each subcluster can be connected to a subcluster using a HA 
configured DFSClient. DFSClient itself will handle switching between NNs in 
case of failover. This DFSClient can be kept as-is and re-used later for next 
requests on same subcluster. So it should know the current active namenode.

To provide a fully federated view, we think it is best to track the state of all 
the Namenodes. In this way, we can expose the federation view in the web UI. 
Given that we have this information, we can also use it as hints for the 
clients. Actually, there was some discussion in HDFS-7858 about using the 
information in ZK to go faster to the Active namenode. That was discarded 
because of the additional complexity, but I think this might be a good opportunity 
to go in that direction. Our current implementation (using the Active hint) is 
faster than the regular failover and produces less load than the hedging 
approach.

bq. How about supporting a default Mount-point for '/' as well. Could be 
optional also? Instead of rejecting requests for the paths which doesnt match 
with other other mountpoints.
There might be some usecases where there might be multiple first level 
directories other than mounted path. Which could go under /.

Yes, this is a common use case. We already support a default mount for / set using 
{{dfs.router.default.nameserviceId}}. We may want to make it more 
explicit/clear.



> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.001.patch, 
> HDFS-10467.PoC.patch, HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-07-18 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382535#comment-15382535
 ] 

Inigo Goiri commented on HDFS-10467:


[~mingma], thank you for the comments. A few answers/clarifications.

bq. Support for mergeFs HADOOP-8298. We should be able to extend the design to 
support this.There might be some issues around how to provision a new sub 
folder (which namespace should own that) and how it works with rebalancer. This 
could be a good addition for future work section.

In the prototype we actually started this, but we haven't gotten into testing it 
yet. In addition, I think merge points go a little bit in the direction of N-Fly 
in HADOOP-12077. I think we should support both of them together. I'll add the 
reference explicitly to the document.

bq. Handling of inconsistent state. Given routers cache which namenodes are 
active, the state could be different from the actual namenode at that moment. 
Thus routers might get {{StandbyException}} and need to retry on another 
namenode. If so, does it mean the routers should leverage ipc 
{{FailoverOnNetworkExceptionRetry}} or use {{DFSClient}} with hint for active 
namenode?

In the current implementation we use the client with the hint. We first try the 
one marked as active in the State Store and we capture {{StandbyExceptions}} 
etc. This is in HDFS-10629 in {{RouterRpcServer#invokeMethod()}}.
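
The gist of that logic, sketched with hypothetical names (the real code is in {{RouterRpcServer#invokeMethod()}}):
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ipc.StandbyException;

class ActiveHintInvokerSketch {
  interface NamenodeCall<T> {
    T call(String namenode) throws IOException;
  }

  /** Try the namenode the State Store marks as active first, then fail over. */
  <T> T invoke(List<String> namenodesActiveFirst, NamenodeCall<T> call)
      throws IOException {
    IOException lastFailure = null;
    for (String nn : namenodesActiveFirst) {
      try {
        return call.call(nn);
      } catch (StandbyException e) {
        // The active hint was stale; remember the failure and try the next one.
        lastFailure = e;
      }
    }
    throw lastFailure != null ? lastFailure
        : new IOException("No namenodes available");
  }
}
{code}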

bq. Soft state vs hard state. while subcluster active namenode machine and 
load/space are soft state that can be reconstructed from namenodes; mount table 
is hard state that need to be persisted. Is there any benefit separating them 
out to use different state stores as they have different persistence 
requirement, access patterns(mount table does't change much while load/space 
update is frequent) and admin interface? For example, admin might want to 
update mount table on demand; but not load/space state.

True, this is easy to implement right now. We should see if people are OK with 
the additional complexity of configuring two backends. I guess we can discuss it 
in HDFS-10630.

bq. Usage of subcluster load/space state. Is it correct that the only consumer 
of subcluster's load/space state is the rebalancer? I image initially we would 
run rebalancer manually. For that, the rebalancer can just pull subcluster's 
load/space state from namenodes on demand. Then we don't have to store 
subcluster load/space state in state store.

Correct. Right now we are not even storing load/space data in the State Store. 
Actually in our Rebalancer prototypes, we are collecting the space externally. 
For now, we will keep the usage state out of the State Store and once we go 
into the Rebalancer, we can discuss what's best.

bq. Admin's modification of mount table. Besides rebalancer, admin might want 
to update mount table during cluster initial setup as well as addition of new 
namespace with new mount entry. If we continue to use mounttable.xml, then 
admins can push the update the same way as viewFs setup. If we use ZK store, 
them we need to provide tools to update state store.

Right now, our admin tool goes through the Routers to modify the mount table. 
We could also go directly to the State Store. I just created HDFS-10646 to 
develop this.

bq. What is the performance optimization in your latest patch, based on async 
RPC client?

Our current optimization is based on being able to use more sockets. The 
current client has a single thread pool per connection, and we were limited by 
this. We haven't explored async extensively yet, and we are not sure it will 
give us the performance we need; we still need to explore this.

I'll update the document accordingly.

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.001.patch, 
> HDFS-10467.PoC.patch, HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10646) Federation admin tool

2016-07-18 Thread Inigo Goiri (JIRA)
Inigo Goiri created HDFS-10646:
--

 Summary: Federation admin tool
 Key: HDFS-10646
 URL: https://issues.apache.org/jira/browse/HDFS-10646
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Inigo Goiri


Tools for administrators to manage HDFS federation. This includes managing the 
mount table and decommission subclusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10619) Cache path in InodesInPath

2016-07-18 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382519#comment-15382519
 ] 

Daryn Sharp commented on HDFS-10619:


The common case is that {{IIP#getPath}} is called frequently and repeatedly.  So 
even if it doesn't happen to be called in a few instances, the huge reduction in 
object allocations for the common case outweighs the extra string overhead (a 
valid concern in general).

Sorry the test broke, that's what I get for separating out a small piece from a 
larger patch that has modified that byte[][] to String function.  It has much 
larger logic problems than the one uncovered here.  I'll repost.
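
The caching idea itself is simple; a hedged sketch with simplified types (the real {{INodesInPath}} holds byte[][] path components):
{code}
class CachedPathSketch {
  private final String[] components;
  // Built once at construction and reused by every getPath() call.
  private final String path;

  CachedPathSketch(String[] components) {
    this.components = components.clone();
    this.path = String.join("/", components); // pay the string cost exactly once
  }

  String getPath() {
    return path; // no per-call allocation
  }
}
{code}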

> Cache path in InodesInPath
> --
>
> Key: HDFS-10619
> URL: https://issues.apache.org/jira/browse/HDFS-10619
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10619.patch
>
>
> INodesInPath#getPath, a frequently called method, dynamically builds the 
> path.  IIP should cache the path upon construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9271) Implement basic NN operations

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382323#comment-15382323
 ] 

Hadoop QA commented on HDFS-9271:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
22s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
39s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818551/HDFS-9271.HDFS-8707.005.patch
 |
| JIRA Issue | HDFS-9271 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 01d167dca69e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d18e396 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16080/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16080/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.

[jira] [Commented] (HDFS-10641) Fix intermittently failing TestBlockManager#testBlockReportQueueing

2016-07-18 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382288#comment-15382288
 ] 

Rushabh S Shah commented on HDFS-10641:
---

Copying the stack trace and the test logs into the jira just in case build 
#9996 gets removed, since it's only failing intermittently.
{noformat}
Stacktrace

java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)

Standard Output

2016-07-13 23:58:32,591 [main] INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:(249)) - dfs.block.invalidate.limit=1000
2016-07-13 23:58:32,591 [main] INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:(255)) - 
dfs.namenode.datanode.registration.ip-hostname-check=true
2016-07-13 23:58:32,591 [main] INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(72)) - 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-07-13 23:58:32,591 [main] INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(77)) - The block deletion will 
start around 2016 Jul 13 23:58:32
2016-07-13 23:58:32,591 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map 
BlocksMap
2016-07-13 23:58:32,591 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(396)) - VM type   = 64-bit
2016-07-13 23:58:32,592 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-07-13 23:58:32,592 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(402)) - capacity  = 2^22 = 4194304 
entries
2016-07-13 23:58:32,597 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:createBlockTokenSecretManager(454)) - 
dfs.block.access.token.enable=false
2016-07-13 23:58:32,597 [main] INFO  blockmanagement.BlockManagerSafeMode 
(BlockManagerSafeMode.java:(150)) - dfs.namenode.safemode.threshold-pct = 
0.999128746033
2016-07-13 23:58:32,597 [main] INFO  blockmanagement.BlockManagerSafeMode 
(BlockManagerSafeMode.java:(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManagerSafeMode 
(BlockManagerSafeMode.java:(153)) - dfs.namenode.safemode.extension = 
3
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(440)) - defaultReplication = 3
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(441)) - maxReplication = 512
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(442)) - minReplication = 1
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(443)) - maxReplicationStreams  = 2
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(444)) - replicationRecheckInterval = 3000
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(445)) - encryptDataTransfer= false
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:(446)) - maxNumBlocksToLog  = 1000
2016-07-13 23:58:32,598 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s6 for DN 
6.6.6.6:9866
2016-07-13 23:58:32,599 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s5 for DN 
5.5.5.5:9866
2016-07-13 23:58:32,599 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s4 for DN 
4.4.4.4:9866
2016-07-13 23:58:32,599 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s3 for DN 
3.3.3.3:9866
2016-07-13 23:58:32,599 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s2 for DN 
2.2.2.2:9866
2016-07-13 23:58:32,599 [main] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s1 for DN 
1.1.1.1:9866
2016-07-13 23:58:32,613 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:(465)) - starting cluster: numNameNodes=1, 
numDataNodes=1
Formatting using clusterid: testClusterID
2016-07-13 23:58:32,616 [main] INFO  namenode.FSEditLog 
(FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-07-13 23:58:32,616 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(703)) - KeyProvider: null
2016-07-13 23:58:32,617 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(710)) - fsLock is fair:true
2016-07-13 23:58:32,617 [main] 

[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-07-18 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-9271:

Attachment: HDFS-9271.HDFS-8707.005.patch

The error in doTestGetDefaultBlockSize occurred in one more test. It is fixed in the 
newly attached patch.

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch, HDFS-9271.HDFS-8707.002.patch, 
> HDFS-9271.HDFS-8707.003.patch, HDFS-9271.HDFS-8707.004.patch, 
> HDFS-9271.HDFS-8707.005.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382185#comment-15382185
 ] 

Hadoop QA commented on HDFS-10645:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 31 unchanged - 0 fixed = 38 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.datanode.TestDatanodeRegister |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.TestSafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818521/HDFS-10645.001.patch |
| JIRA Issue | HDFS-10645 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 06503efec006 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b4a708 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16079/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16079/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16079/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-07-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382072#comment-15382072
 ] 

Vinayakumar B commented on HDFS-10467:
--

Hi,
This will be a nice addition to Federation.

As it happens, I was also working on a similar feature with almost the same design.

The design looks great; I have some comments, though.

{quote}3.3.2 Namenode heartbeat HA
For high availability and flexibility, multiple Routers can monitor the same 
Namenode and heartbeat the information to the State Store{quote}
{quote}If a Router tries to contact the active Namenode but is unable to do it, 
the Router will try the other
Namenodes in the subcluster.{quote}
Why can't we use an HA-configured DFSClient here directly? Just as current clients 
connect to an HA nameservice, the Router could reach each subcluster through a 
DFSClient configured for that subcluster's HA NameNode pair. The DFSClient itself 
handles switching between NameNodes on failover, and the same client can be kept 
and re-used for subsequent requests to the same subcluster, so it always knows the 
current active NameNode. With this approach there would be no need for a heartbeat 
between the Router and the NameNode just to monitor NameNode status.
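A minimal sketch of such a per-subcluster HA client, assuming a subcluster exposed as 
nameservice "subcluster1" with NameNodes nn1/nn2 (all hostnames and names here are 
illustrative, not taken from the design doc):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SubclusterClientExample {
  // Builds an HA-configured client for one subcluster; ConfiguredFailoverProxyProvider
  // tracks the active NameNode, so the Router would not need to heartbeat NameNode state.
  public static FileSystem newSubclusterClient() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://subcluster1");
    conf.set("dfs.nameservices", "subcluster1");
    conf.set("dfs.ha.namenodes.subcluster1", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.subcluster1.nn1", "nn1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.subcluster1.nn2", "nn2.example.com:8020");
    conf.set("dfs.client.failover.proxy.provider.subcluster1",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    // Failover between nn1 and nn2 is handled transparently by the proxy provider.
    return FileSystem.get(conf);
  }
}
{code}
The Router could cache one such client per subcluster and route requests through it.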

bq. MountTable
How about also supporting a default mount point for '/', perhaps as an optional 
feature, instead of rejecting requests for paths that do not match any of the other 
mount points? There are use cases with multiple first-level directories besides the 
mounted paths, and those could go under '/', similar to Linux filesystem mounts.

I will try to review the code on the sub-JIRAs.


> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.001.patch, 
> HDFS-10467.PoC.patch, HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Status: Patch Available  (was: Open)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, Selection_047.png, 
> Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Attachment: HDFS-10645.001.patch

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, Selection_047.png, 
> Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382053#comment-15382053
 ] 

Yuanbo Liu edited comment on HDFS-10645 at 7/18/16 10:33 AM:
-

[~cheersyang] Good point. I have submitted a new snapshot (Selection_048.png) and the 
v1 patch.


was (Author: yuanbo):
[~cheersyang] Good point. I submitted a new snapshot and v1 patch.

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png, Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Attachment: Selection_048.png

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png, Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382053#comment-15382053
 ] 

Yuanbo Liu commented on HDFS-10645:
---

[~cheersyang] Good point. I submitted a new snapshot and v1 patch.

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png, Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381938#comment-15381938
 ] 

Weiwei Yang commented on HDFS-10645:


It would be good to expose the block report size as a datanode metric, so users could 
monitor it against the protobuf limit and build alerts for when it exceeds a 
threshold. It is really annoying when a DN fails to send its block report because the 
report has grown too big, and it's hard to figure out why. Adding it to the UI is also 
good, but we need to think about how to display it, e.g.

|| Last Block Report Size (Max Size) ||
| 32 MB (64 MB) |

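A hypothetical sketch of what exposing such a gauge through the metrics2 framework 
could look like (the class name, metric name, and update hook are assumptions for 
illustration only, not the HDFS-10645 patch):
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(name = "BlockReportSize", about = "Block report size info", context = "dfs")
public class BlockReportSizeMetrics {
  // The metrics2 framework instantiates this field when the annotated source is registered.
  @Metric("Size of the last block report sent to the NameNode, in bytes")
  MutableGaugeLong lastBlockReportSize;

  public static BlockReportSizeMetrics create() {
    return DefaultMetricsSystem.instance()
        .register("BlockReportSizeMetrics", "Block report size", new BlockReportSizeMetrics());
  }

  // Hypothetical hook: called by the datanode after serializing a block report.
  public void setLastBlockReportSize(long bytes) {
    lastBlockReportSize.set(bytes);
  }
}
{code}
Once published through JMX, the gauge can be scraped by monitoring systems and, as 
suggested above, compared against the protobuf limit for alerting.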

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Attachment: (was: Selection_046.png)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Attachment: Selection_047.png

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_047.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Description: Record block report size as a metric and show it on datanode 
UI. It's important for administrators to know the bottleneck of  block report, 
and the metric is also a good tuning metric.  (was: Add a new metric called 
"Max block report size". It's important for administrators to know the 
bottleneck of  block report, and the metric is also a good tuning metric.)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_046.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381802#comment-15381802
 ] 

Yuanbo Liu edited comment on HDFS-10645 at 7/18/16 8:52 AM:


If the cluster grows big enough, it will hit this error:
{code}
org.apache.hadoop.ipc.RemoteException: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the 
size limit.
{code}
Apparently the block report size exceeds the protobuf limit, and the blocks in the 
data directory are then marked as unavailable in the namespace. This is a bad sign 
for the cluster even with 3-way replication. It would be better if administrators 
could see the block report size in time, so I propose adding this metric to the 
datanode web UI.
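
A hedged workaround sketch for the limit itself, assuming a Hadoop version where the 
IPC decode limit is configurable (e.g. after HDFS-10312, where the NameNode's protobuf 
decode limit follows ipc.maximum.data.length, default 64 MB); the 128 MB value is only 
an illustrative assumption:
{code}
import org.apache.hadoop.conf.Configuration;

public class RaiseIpcDataLengthExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The NameNode reads this key from its own configuration; shown here only to
    // illustrate the key and an example value.
    conf.setInt("ipc.maximum.data.length", 128 * 1024 * 1024);
    System.out.println("ipc.maximum.data.length = "
        + conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024));
  }
}
{code}
This is only a stop-gap; the proposed metric is what would let administrators see the 
report size approaching the limit in the first place.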


was (Author: yuanbo):
If the cluster grows big enough, it will hit this error:
{code}
org.apache.hadoop.ipc.RemoteException: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the 
size limit.
{code}
Apparently the block report size exceed the limit of PB, and the blocks in the 
data directory will be marked as unavailable in namespace. This is a bad sign 
for the cluster despite of 3 replications. It's will be better if the 
administrators get the "Max block report size" in time. So I propose to add 
this metric to datanode web ui.

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_046.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Summary: Make block report size as a metric and add this metric to datanode 
web ui  (was: Make max block report size as a metric and add this metric to 
datanode web ui)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: Selection_046.png
>
>
> Add a new metric called "Max block report size". It's important for 
> administrators to know the bottleneck of  block report, and the metric is 
> also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org